
Report: 200 Google AI contractors cut loose

By Bill Thompson
Technology | 6 Min Read
Last updated: October 29, 2025 10:53 am

Google has laid off about 200 contract workers who were employed to train and test its artificial intelligence products, Wired reported. The cuts, which reportedly came in at least two rounds, hit specialists who worked on systems such as AI Overviews and chatbot safety pipelines, roles that go to the heart of how contemporary AI is built and managed.

Those affected and what their roles involved

The workers were employees of GlobalLogic and its subcontractors, not direct employees of Google’s parent company, Alphabet. Many held advanced degrees and did work such as red-teaming models, labeling and ranking outputs, tuning responses with human feedback, and moderating edge cases that automated detection missed. In other words, they did the unglamorous behind-the-scenes work that keeps consumer-facing AI useful and within safety guardrails; a sketch of what one such rating record might look like follows below.
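To make the nature of that work concrete, here is a minimal sketch in Python of what a single human-rating record in a pipeline like this could look like. The field names and values are hypothetical illustrations; neither Wired’s report nor Google describes GlobalLogic’s actual schema.

# Hypothetical sketch of a human-in-the-loop rating record; field names are
# illustrative, not GlobalLogic's or Google's actual schema.
from dataclasses import dataclass, field

@dataclass
class RatingTask:
    prompt: str                     # query shown to the model
    model_response: str             # output the rater is evaluating
    quality_score: int = 0          # rater's 1-5 judgment of usefulness
    safety_flags: list[str] = field(default_factory=list)  # policy categories the rater applied
    needs_escalation: bool = False  # an edge case automated filters missed

# A rater reviews one AI Overviews-style answer and records a judgment.
task = RatingTask(
    prompt="Why are bees important?",
    model_response="Bees pollinate roughly a third of the crops we eat...",
    quality_score=4,
)
print(task)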

[Image: A close-up of a smartphone screen displaying the Google AI Overviews interface, with a search result about the importance of bees.]

The dismissals caught many off guard, former contractors told Wired. Two of the workers have filed complaints with the National Labor Relations Board, saying they were fired unfairly and in retaliation for raising disputes over pay and job security. Those cases could establish clearer limits on the rights of the growing ranks of AI contractors who perform “human-in-the-loop” quality control.

Google’s reply and how its vendor model applies

The individuals were employed by GlobalLogic or one of its subcontractors, Google told Wired, and employment decisions are a matter for those companies. Google also noted that its suppliers are audited against its Supplier Code of Conduct, underscoring the company’s vendor-driven approach even when the work supports core AI experiences.

GlobalLogic, a Hitachi Group firm, provides engineering and data services to large tech companies. Outsourcing this layer lets platforms scale AI evaluation rapidly, but it also adds complexity: workers performing high-stakes safety and quality tasks frequently lack the pay transparency, job protections and institutional knowledge available to in-house teams.

Why these jobs matter for the quality of AI

Today’s large models do not become safe or useful through pretraining alone. They rely on reinforcement learning from human feedback, focused red-teaming and continuous evaluation, tasks typically performed by trained raters and domain experts. This human layer has become a vital component of aligning models with what users and policymakers expect, as noted by Stanford’s Institute for Human-Centered Artificial Intelligence.
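To illustrate mechanically why that feedback matters, the sketch below shows the standard pairwise preference loss used to train reward models from human rankings, a Bradley-Terry formulation common in published RLHF work. It is a generic textbook example, not a description of Google’s pipeline.

# Generic pairwise preference loss from the RLHF literature; an illustration,
# not Google's actual training stack.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    # -log(sigmoid(r_chosen - r_rejected)): the loss shrinks as the reward
    # model learns to score the human-preferred answer higher.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Hypothetical scores for two candidate answers after a rater ranked A over B.
r_a = torch.tensor([1.7])  # answer the human preferred
r_b = torch.tensor([0.4])  # answer the human rejected
print(preference_loss(r_a, r_b).item())  # ~0.24, low because the model agrees

Fewer raters means fewer such comparisons, which directly thins the signal these models are trained and evaluated on.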

[Image: A Google search interface on a smartphone with the query “what are good options for a...” and a loading indicator that says “Finding places....”]

The stakes are apparent in search and assistant products. AI Overviews, for instance, came under fire after users received strange or incorrect answers, forcing Google to scale back the feature and tweak its triggers. Reducing the human evaluation available to catch and fix such harms can slow repairs and put more pressure on automated defenses, which tend to perform poorly in novel or adversarial settings.

A well-worn fault line: contractors vs. full-time staff

Alphabet has long used a vast contingent workforce of temps, vendors and contractors, known internally as TVCs, to perform functions such as content moderation, customer support, data operations and now AI alignment. Labor advocates, including the Alphabet Workers Union, have long argued that some of the company’s most important work is done by people who receive fewer benefits and less job security than their full-time colleagues. A widely cited analysis in the late 2010s found that contractors outnumbered full-time employees at Google, a sign of how deeply the model has taken hold across the company.

The newest cuts also come amid a broader industry pullback and a reshuffling toward generative AI priorities. Big Tech companies have trimmed teams in older product lines while making major investments in model training and infrastructure. But even as spending on data centers and chips soars, human judgment remains a material expense, and an appealing target for vendors under surging price pressure.

What’s next to watch as labor and AI oversight evolve

The NLRB complaints could help define how labor law applies to highly skilled AI contractors embedded in platform workflows. Regulators are also increasing their scrutiny of AI claims and provenance, with agencies in the United States and Europe looking more closely at how AI features are built, tested and sourced. At the same time, publishers and rights holders are disputing how their content is used to power AI features, raising legal and compliance exposure that demands more, not less, expert human oversight.

For users and developers, the key question is practical: can consumer-facing AI progress safely and reliably without continued investment in the people who align it? The layoffs, as reported, surface a paradox at the heart of the AI boom: cutting the people responsible for making models safer is easy on paper, but costly when failures seep into products that billions depend on.

Bill Thompson
Bill Thompson is a veteran technology columnist and digital culture analyst with decades of experience reporting on the intersection of media, society, and the internet. His commentary has been featured across major publications and global broadcasters. Known for exploring the social impact of digital transformation, Bill writes with a focus on ethics, innovation, and the future of information.