Dozens of current employees at OpenAI and Google DeepMind have filed a statement with the court siding with Anthropic in its legal fight against the U.S. Department of Defense, a rare cross-company show of solidarity in the fiercely competitive AI sector. The brief supports Anthropic’s challenge to a Pentagon decision that labeled the company a supply-chain risk, a designation usually aimed at foreign adversaries rather than domestic vendors.
The filing, which includes Google DeepMind chief scientist Jeff Dean among its signatories, argues the government overreached by punishing a contractor for refusing certain uses of its technology. Anthropic has drawn red lines on enabling mass surveillance of Americans and autonomous weapon targeting or firing, positions long associated with its safety-first philosophy.
The statement landed shortly after Anthropic filed two lawsuits seeking to reverse the designation, a development first reported by Wired. Within hours of the Pentagon’s move, the department inked a separate deal with OpenAI, a decision that triggered internal dissent from many OpenAI staff, who then chose to back Anthropic in court.
Why Employees Are Intervening In Anthropic’s DoD Case
At the heart of the brief is a simple argument: when public law is unsettled, contractual guardrails and technical safeguards are the first line of defense against misuse. Anthropic’s posture, the employees contend, reflects widely shared norms in AI safety, not defiance. The company has publicly committed to “constitutional AI” and a responsible-scaling policy designed to fence off sensitive capabilities, and the signatories warn that punishing such limits could chill responsible practices across the industry.
The group also points to process. If the Pentagon no longer agreed with Anthropic’s use policies, it could have exercised familiar contracting tools and moved on. Under the Federal Acquisition Regulation, agencies routinely terminate for convenience and recompete work. Resorting to a supply-chain risk label, they argue, goes far beyond routine procurement discretion and sends a destabilizing signal to researchers and vendors.
An Unusual Use Of A National Security Tool
Supply-chain risk designations within the federal government typically target hardware or software linked to hostile states or prohibited telecom gear, often informed by frameworks under the Federal Acquisition Security Council and restrictions like Section 889 of the 2019 defense authorization law. Applying a similar label to a U.S.-based AI lab with mainstream investors and government contracts is, by most procurement standards, atypical.
Policy context makes the move even more striking. The Pentagon has publicly committed to Responsible AI principles and maintains long-standing rules for autonomy in weapons systems under DoD Directive 3000.09. It also stood up Task Force Lima to accelerate safe adoption of generative AI. Critics say the designation risks cutting against those commitments by disincentivizing vendors that proactively set limits aligned with responsible-use rhetoric.
Contracting Pressure Meets Safety Red Lines
Defense and intelligence demand for generative AI is surging, from analytic triage to logistics planning. Analysts have estimated that annual DoD contract obligations for AI and machine learning now total in the low billions of dollars, with year-over-year growth in both pilots and production awards. That momentum intensifies pressure on labs to accommodate a wide range of missions under “lawful purpose” clauses.
Anthropic’s boundaries—particularly around surveillance and autonomous weapons—collide with some of those ambitions. The company and its supporters argue that clear limits are a feature, not a bug, given the absence of sector-specific statutes governing generative AI. NIST’s AI Risk Management Framework and the White House’s AI executive order offer guidance, but neither is a direct substitute for law. Until Congress acts, the brief suggests, vendor policies may be the most concrete restraint available.
Echoes Of Past Tech Worker Revolts Resurface In AI
The intervention recalls earlier flashpoints, including Google’s withdrawal from Project Maven in 2018 following employee protests and internal pushback at Microsoft over an Army HoloLens contract. What’s different now is the cross-lab nature of the dissent: staff from rival AI houses are publicly backing a competitor on principle, signaling that a baseline of safety norms is hardening across the field.
That alignment has competitive stakes. The employees warn that sanctioning a top U.S. lab over use restrictions could push research talent toward non-defense markets or overseas projects, weakening American leadership. Stanford’s AI policy analyses have repeatedly flagged talent concentration as a critical advantage for the United States, making any move that erodes researcher trust a strategic risk.
What To Watch Next As The Anthropic-Pentagon Dispute Unfolds
The key questions now rest with the courts. Will judges view the designation as an abuse of discretion or a legitimate exercise of national security authority? Procurement attorneys note that a ruling curbing the Pentagon here could reset boundaries for how agencies use supply-chain risk tools in software and AI.
Equally consequential is whether other AI labs clarify or harden their own red lines in response. If more vendors codify prohibitions on surveillance or autonomous targeting, the Pentagon may face a new normal: accommodating responsible-use limits through tailored contracts rather than coercive designations. That outcome could align practice with policy—and keep the U.S. AI ecosystem focused on innovation without sacrificing core safety principles.