FindArticles © 2025. All Rights Reserved.

Former DOGE Staff Reveal AI-Guided Cuts Under Musk

Last updated: March 13, 2026 6:02 pm
By Bill Thompson
News
6 Min Read

Former staffers at the Elon Musk-led Department of Government Efficiency, known as DOGE, have offered a rare inside account of how the agency made sweeping funding decisions, describing an operation that leaned on ChatGPT to flag programs for cuts tied to diversity, equity, and inclusion. Their deposition testimony, taken as part of a federal lawsuit over DOGE’s moves against the National Endowment for the Humanities, outlines a process critics say was both hasty and ideologically skewed.

The case was brought by the American Council of Learned Societies, the American Historical Association, and the Modern Language Association, which argue DOGE’s actions unlawfully gutted humanities programs. Lengthy deposition videos of two former DOGE employees, Justin Fox and Nate Cavanaugh, have since circulated widely, offering granular detail on internal practices and prompting fresh questions about governance, expertise, and the role of AI in policy.

Table of Contents
  • Depositions Detail AI’s Role in Program and Grant Cuts
  • Searches Focused on Identity Terms in Grant Reviews
  • Experience Gaps and Internal Culture at the Agency
  • Budget Targets, Reductions Identified, and Fallout
  • What to Watch Next in the DOGE Lawsuit and Policy
[Image: Elon Musk wearing a black t-shirt with "DOGE" written on it, opening his jacket.]

Depositions Detail AI’s Role in Program and Grant Cuts

According to testimony, DOGE staff routinely prompted ChatGPT with a blunt query: “Does the following relate at all to DEI?” The chatbot was instructed to answer with a simple yes or no plus a brief explanation kept under 120 characters. The resulting labels were then used to help prioritize programs and grants for elimination, particularly within the humanities portfolio.

The employees described this as a way to “speed things up” across vast spreadsheets of awards and proposals. But the description underscores a deeper concern experts have raised about automating value-laden judgments. AI policy researchers have repeatedly cautioned that models can echo the biases in their prompts and training data, particularly when asked to render binary decisions on complex social or cultural content.
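Based on the testimony's description, the triage step can be sketched roughly as follows. Only the quoted prompt wording comes from the depositions; the function names, parsing logic, and everything else here are hypothetical illustration, not DOGE's actual tooling.

```python
# Hypothetical reconstruction of the triage step described in testimony.
# Only the quoted prompt wording is from the depositions; the function
# names and parsing logic are illustrative assumptions.

PROMPT_TEMPLATE = (
    "Does the following relate at all to DEI? "
    "Answer yes or no, with a brief explanation under 120 characters.\n\n"
    "{grant_text}"
)

def build_prompt(grant_text: str) -> str:
    """Assemble the yes/no classification prompt for one spreadsheet row."""
    return PROMPT_TEMPLATE.format(grant_text=grant_text)

def parse_flag(model_reply: str) -> bool:
    """Collapse a free-text model reply into the binary label staff relied on."""
    return model_reply.strip().lower().startswith("yes")
```

Reducing each award to a single boolean is what would let a small team sweep "vast spreadsheets" quickly, and it is also precisely the binary framing on value-laden content that AI researchers caution against.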

Searches Focused on Identity Terms in Grant Reviews

Fox testified that DOGE staff searched grant databases for terms like “Black,” “gender,” “LGBTQ+,” and “equality” to identify potential DEI links, but did not systematically search for terms such as “Caucasian” or “heterosexual.” In Cavanaugh’s account, internal labels like “craziest” were applied to dozens of LGBTQ+-related grants slated for review, a taxonomy likely to fuel claims of viewpoint-based targeting.
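The asymmetry Fox described is easy to see in miniature. Below is a minimal sketch assuming a simple substring search; the term lists are drawn from the testimony, while the code itself is hypothetical.

```python
# Illustrative only: the search terms are quoted from testimony; the
# data structures and matching logic are hypothetical, not DOGE tooling.

SEARCHED_TERMS = ["black", "gender", "lgbtq+", "equality"]
UNSEARCHED_TERMS = ["caucasian", "heterosexual"]  # terms staff did not query

def flagged(grant_title: str, terms=SEARCHED_TERMS) -> bool:
    """Return True if any monitored term appears in the grant title."""
    title = grant_title.lower()
    return any(term in title for term in terms)

# With a one-sided term list, only one side of identity topics is flagged:
flagged("Oral Histories of Black Veterans")       # True
flagged("Heterosexual Marriage in 1950s Cinema")  # False
```

Whatever one thinks of the underlying policy goal, a keyword filter can only surface what it is told to look for, which is why the choice of terms, not the search itself, carries the ideological weight.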

Fox further acknowledged that documentaries focused on Black civil rights or on Jewish women during the Holocaust could be marked for cuts on the grounds that they centered specific groups rather than “humankind” broadly. Humanities scholars counter that NEH’s congressional mandate explicitly values the study of diverse histories and cultures; civil rights historians note that identity-specific inquiry is a core part of documenting the American experience.

First Amendment and public-administration experts have warned that criteria singling out identity topics can raise constitutional questions in government grantmaking. While agencies have discretion to set priorities, legal analysts at organizations like the Brennan Center for Justice say viewpoint-based exclusions are especially fraught when applied to academic and cultural work.

[Image: A close-up of ChatGPT's message bar with "Message ChatGPT" typed in, the cursor hovering over a Search button with a globe icon.]

Experience Gaps and Internal Culture at the Agency

Both Fox and Cavanaugh said they had no prior government experience when they joined DOGE. Viral clips show Fox struggling to define DEI even as he was tasked with flagging DEI-related programs for defunding. The depositions depict a workplace focused on rapid cost-cutting with limited policy vetting and a casual lexicon for sorting sensitive cultural material.

Compensation records mentioned in testimony place Fox’s salary at $150,000 and Cavanaugh’s at $120,000. The numbers are consistent with mid- to senior-level federal pay bands, though former officials in inspector general offices say specialized training is typically expected for personnel evaluating program eligibility and constitutional risk.

Budget Targets, Reductions Identified, and Fallout

Internally, DOGE teams were told to identify trillions of dollars in cuts to shrink the federal deficit, according to the depositions. The employees said the agency ultimately identified hundreds of billions in reductions across various lines of spending. Yet the deficit still rose, a trend economists attribute largely to mandatory programs, interest costs, and tax-collection dynamics that lie beyond the reach of discretionary cultural grants.

DOGE was later disbanded amid mounting controversy. Separately, oversight bodies have opened investigations into DOGE practices and personnel. In one case referenced by officials, a DOGE employee allegedly removed Social Security records onto a USB drive before leaving for a private-sector role, underscoring wider concerns about data handling and records compliance.

What to Watch Next in the DOGE Lawsuit and Policy

The lawsuit brought by leading humanities organizations now doubles as a test of how far an executive initiative can go in redefining cultural spending with AI-assisted triage. Plaintiffs seek to restore eliminated NEH programs and to curtail what they describe as unlawful, ideologically driven decision-making. Legal scholars expect courts to scrutinize whether DOGE’s criteria were neutral and consistently applied.

For agencies across government, the episode is already a cautionary tale. Auditors at the Government Accountability Office and inspectors general have urged clear guardrails for using generative AI in public decisions, from transparency and audit logs to bias testing and human-in-the-loop review. The depositions from former DOGE employees show what happens when those safeguards are thin—and when the definition of “efficiency” eclipses expertise.

By Bill Thompson
Bill Thompson is a veteran technology columnist and digital culture analyst with decades of experience reporting on the intersection of media, society, and the internet. His commentary has been featured across major publications and global broadcasters. Known for exploring the social impact of digital transformation, Bill writes with a focus on ethics, innovation, and the future of information.