Anthropic chief executive Dario Amodei has sharply criticized OpenAI’s public framing of its new U.S. defense contract, telling employees that OpenAI’s messaging amounts to “straight up lies,” according to a staff memo reported by The Information. The rift spotlights a growing divide in how leading AI labs engage with the Pentagon and what, exactly, counts as meaningful safety guardrails.
Why Anthropic Walked Away From the Pentagon Deal
Anthropic had been negotiating with the Department of Defense and already held a substantial federal contract, but talks unraveled over access and use restrictions, people familiar with the matter said. The company pushed for explicit prohibitions against using its models for domestic mass surveillance or to power autonomous weapons, provisions it viewed as baseline commitments rather than stretch goals.

When the DoD pressed to retain “any lawful use” access—language common in federal procurement—Anthropic refused to proceed. In the memo cited by The Information, Amodei argued that accepting vague limits would turn safety into performance art rather than enforceable practice, a stance he has previously summarized as rejecting “safety theater.”
OpenAI’s Contract And The ‘Lawful Use’ Dispute
OpenAI, by contrast, reached an agreement and later said its systems could be used for “all lawful purposes,” while claiming the deal explicitly carves out activities such as mass domestic surveillance. In a company blog post, OpenAI asserted that the government affirmed such surveillance would be illegal and was not contemplated under the contract.
That reassurance did little to satisfy Anthropic. Its core complaint is not about current intent but future drift: law evolves, and what is illegal today could be reinterpreted or authorized tomorrow. Civil liberties groups have made similar points for years, citing shifting boundaries around surveillance authorities and emergency powers. In defense procurement, “lawful” can be a moving target unless paired with narrow definitions, auditable controls, and penalties for misuse.
The contrast poses a precedent-setting question: Will frontier AI contracts hinge on broad legality standards, or on firm, contractually binding red lines tied to concrete technical and operational safeguards?
Public Sentiment and Market Signals After DoD Deal
Early indicators suggest the debate is resonating beyond Washington. Third-party app-intelligence firms observed a 295% surge in ChatGPT uninstalls after the DoD deal became public. In his memo, Amodei told staff that Anthropic's app had climbed near the top of the iOS charts, claiming a No. 2 ranking, and argued that the broader public sees Anthropic's stance as the more trustworthy one.

Enterprise buyers are also paying attention. Legal, compliance, and security teams increasingly ask vendors to codify use restrictions, map model capabilities to risk frameworks, and provide audit hooks. Where OpenAI points to legal boundaries, Anthropic is pressing for contractual clauses that survive policy shifts and administration changes, an approach more aligned with the controls in the NIST AI Risk Management Framework and with widely adopted model cards and system cards.
Defense AI’s Rapidly Shifting Ground and Programs
The Pentagon has published Responsible AI Tenets and implementation guidance, and it operates within autonomy policies such as DoD Directive 3000.09, which governs weapon system development. Yet the department is simultaneously accelerating programs like Replicator to field large numbers of autonomous and attritable systems. That push, along with hundreds of ongoing AI projects across the services, is drawing software-first players deeper into the national security ecosystem.
The last time Silicon Valley’s values collided this directly with defense work—during the Project Maven controversy—employee backlash at a major tech firm scuttled a high-profile AI imaging contract and reshaped recruiting for years. Today’s fight is more nuanced: both Anthropic and OpenAI say they want safety, but they disagree on whether legal standards alone are sufficient or whether bright-line bans are the only credible assurances.
What to Watch Next in Defense AI Contract Debate
Key open questions now include whether OpenAI will publish contract language or third-party attestations that clarify the limits it describes, and whether other labs adopt Anthropic’s harder lines on surveillance and weaponization. Watch for follow-on guidance from the Defense Department’s Chief Digital and Artificial Intelligence Office and any updates to acquisition templates that define “lawful” in practice.
Regardless of who wins this round of messaging, the outcome will influence standard-setting across the industry. If “all lawful purposes” becomes the default, expect firms to invest more in compliance narratives. If explicit prohibitions take hold, defense contracts for frontier models will likely include tighter model access controls, red-teaming on dual-use risks, and enforceable remedies for violations. For now, the sharpest line in the sand is the one Amodei just drew.
