Former staffers at the Elon Musk-led Department of Government Efficiency, known as DOGE, have offered a rare inside account of how the agency made sweeping funding decisions, describing an operation that leaned on ChatGPT to flag programs for cuts tied to diversity, equity, and inclusion. Their deposition testimony, taken as part of a federal lawsuit over DOGE’s moves against the National Endowment for the Humanities, outlines a process critics say was both hasty and ideologically skewed.
The case was brought by the American Council of Learned Societies, the American Historical Association, and the Modern Language Association, which argue DOGE’s actions unlawfully gutted humanities programs. Lengthy deposition videos of two former DOGE employees, Justin Fox and Nate Cavanaugh, have since circulated widely, offering granular detail on internal practices and prompting fresh questions about governance, expertise, and the role of AI in policy.
Depositions Detail AI’s Role in Program and Grant Cuts
According to testimony, DOGE staff routinely prompted ChatGPT with a blunt, binary query: “Does the following relate at all to DEI?” The chatbot was instructed to respond with a simple yes or no and a brief explanation, capped at under 120 characters. The resulting labels were then used to help prioritize programs and grants for elimination, particularly within the humanities portfolio.
The employees described this as a way to “speed things up” across vast spreadsheets of awards and proposals. But the description underscores a deeper concern experts have raised about automating value-laden judgments. AI policy researchers have repeatedly cautioned that models can echo the biases in their prompts and training data, particularly when asked to render binary decisions on complex social or cultural content.
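For readers picturing what such a pipeline looks like in practice, the workflow described in testimony can be sketched in a few lines of Python. The prompt wording comes from the depositions; everything else here — the function names, the data, and the stubbed-out model call standing in for a live chatbot API — is illustrative, not DOGE’s actual tooling.

```python
# Sketch of the AI-assisted triage described in testimony: each grant
# description is sent to a chat model with a fixed yes/no prompt, and
# the one-word verdict is used to flag rows for elimination review.
# The model call is injected as a function so the pipeline can run
# against a stub; all names here are hypothetical.

from typing import Callable

PROMPT = ("Does the following relate at all to DEI? "
          "Answer yes or no, then a reason under 120 characters.\n\n")

def triage(grants: list[dict], ask_model: Callable[[str], str]) -> list[dict]:
    """Label every grant with the model's verdict; return the flagged ones."""
    flagged = []
    for grant in grants:
        reply = ask_model(PROMPT + grant["description"]).strip().lower()
        grant["dei_flag"] = reply.startswith("yes")
        if grant["dei_flag"]:
            flagged.append(grant)
    return flagged

def fake_model(prompt: str) -> str:
    """Stub standing in for a live chatbot API call."""
    if "equity" in prompt.lower():
        return "Yes - mentions equity programming"
    return "No - unrelated to DEI"

if __name__ == "__main__":
    rows = [
        {"id": 1, "description": "Oral histories of Appalachian coal towns"},
        {"id": 2, "description": "Equity-focused curriculum for museum educators"},
    ]
    print([g["id"] for g in triage(rows, fake_model)])  # [2]
```

The sketch makes the experts’ concern concrete: the entire judgment about a grant’s fate turns on whether a free-text model reply happens to start with “yes,” with no appeal, calibration, or audit step anywhere in the loop.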
Searches Focused on Identity Terms in Grant Reviews
Fox testified that DOGE staff searched grant databases for terms like “Black,” “gender,” “LGBTQ+,” and “equality” to identify potential DEI links, but did not systematically search for terms such as “Caucasian” or “heterosexual.” In Cavanaugh’s account, internal labels like “craziest” were applied to dozens of LGBTQ+-related grants slated for review, a taxonomy likely to fuel claims of viewpoint-based targeting.
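The structural problem with that search is easy to see in miniature. In the sketch below, the term list follows Fox’s testimony; the data and function names are hypothetical. Because identity terms appear on only one side of the list, the results skew toward those topics by construction, regardless of anyone’s intent at query time.

```python
# Illustrative sketch of a one-sided keyword search over grant records.
# The term list mirrors the searches described in testimony; note the
# absence of counterpart terms such as "caucasian" or "heterosexual".

SEARCH_TERMS = ["black", "gender", "lgbtq", "equality"]

def keyword_hits(grants: list[dict], terms: list[str] = SEARCH_TERMS) -> list[dict]:
    """Return grants whose description mentions any search term."""
    return [g for g in grants
            if any(term in g["description"].lower() for term in terms)]

if __name__ == "__main__":
    sample = [
        {"description": "Gender and labor in the Gilded Age"},
        {"description": "Conserving medieval manuscripts"},
    ]
    print(len(keyword_hits(sample)))  # 1
```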
Fox further acknowledged that documentaries focused on Black civil rights or on Jewish women during the Holocaust could be marked for cuts on the grounds that they centered specific groups rather than “humankind” broadly. Humanities scholars counter that NEH’s congressional mandate explicitly values the study of diverse histories and cultures; civil rights historians note that identity-specific inquiry is a core part of documenting the American experience.
First Amendment and public-administration experts have warned that criteria singling out identity topics can raise constitutional questions in government grantmaking. While agencies have discretion to set priorities, legal analysts at organizations like the Brennan Center for Justice say viewpoint-based exclusions are especially fraught when applied to academic and cultural work.
Experience Gaps and Internal Culture at the Agency
Both Fox and Cavanaugh said they had no prior government experience when they joined DOGE. Viral clips show Fox struggling to define DEI even as he was tasked with flagging DEI-related programs for defunding. The depositions depict a workplace focused on rapid cost-cutting with limited policy vetting and a casual lexicon for sorting sensitive cultural material.
Compensation records mentioned in testimony place Fox’s salary at $150,000 and Cavanaugh’s at $120,000. The numbers are consistent with mid- to senior-level federal pay bands, though former officials in inspector general offices say specialized training is typically expected for personnel evaluating program eligibility and constitutional risk.
Budget Targets, Reductions Identified, and Fallout
Internally, DOGE teams were told their cuts should shrink the federal deficit by trillions of dollars, according to the depositions. The employees said the agency ultimately identified hundreds of billions in reductions across various lines of spending. Yet the national deficit still rose, a trend economists attribute largely to mandatory programs, interest costs, and tax-collection dynamics that lie beyond the reach of discretionary cultural grants.
DOGE was later disbanded amid mounting controversy. Separately, oversight bodies have opened investigations into DOGE practices and personnel. In one case referenced by officials, a DOGE employee allegedly removed Social Security records onto a USB drive before leaving for a private-sector role, underscoring wider concerns about data handling and records compliance.
What to Watch Next in the DOGE Lawsuit and Policy
The lawsuit brought by leading humanities organizations now doubles as a test of how far an executive initiative can go in redefining cultural spending with AI-assisted triage. Plaintiffs seek to restore eliminated NEH programs and to curtail what they describe as unlawful, ideologically driven decision-making. Legal scholars expect courts to scrutinize whether DOGE’s criteria were neutral and consistently applied.
For agencies across government, the episode is already a cautionary tale. Auditors at the Government Accountability Office and inspectors general have urged clear guardrails for using generative AI in public decisions, from transparency and audit logs to bias testing and human-in-the-loop review. The depositions from former DOGE employees show what happens when those safeguards are thin—and when the definition of “efficiency” eclipses expertise.