FindArticles

Anthropic CEO Calls OpenAI Military Deal Messaging Lies

By Gregory Zuckerman
Last updated: March 5, 2026
Technology · 6 Min Read

Anthropic chief executive Dario Amodei has sharply criticized OpenAI’s public framing of its new U.S. defense contract, telling employees that OpenAI’s messaging amounts to “straight up lies,” according to a staff memo reported by The Information. The rift spotlights a growing divide in how leading AI labs engage with the Pentagon and what, exactly, counts as meaningful safety guardrails.

Why Anthropic Walked Away From the Pentagon Deal

Anthropic had been negotiating with the Department of Defense and already held a substantial federal contract, but talks unraveled over access and use restrictions, people familiar with the matter said. The company pushed for explicit prohibitions against using its models for domestic mass surveillance or to power autonomous weapons, provisions it viewed as baseline commitments rather than stretch goals.


When the DoD pressed to retain “any lawful use” access—language common in federal procurement—Anthropic refused to proceed. In the memo cited by The Information, Amodei argued that accepting vague limits would turn safety into performance art rather than enforceable practice, a stance he has previously summarized as rejecting “safety theater.”

OpenAI’s Contract And The ‘Lawful Use’ Dispute

OpenAI, by contrast, reached an agreement and later said its systems could be used for “all lawful purposes,” while claiming the deal explicitly carves out activities such as mass domestic surveillance. In a company blog post, OpenAI asserted that the government affirmed such surveillance would be illegal and was not contemplated under the contract.

That reassurance did little to satisfy Anthropic. Its core complaint is not about current intent but future drift: law evolves, and what is illegal today could be reinterpreted or authorized tomorrow. Civil liberties groups have made similar points for years, citing shifting boundaries around surveillance authorities and emergency powers. In defense procurement, “lawful” can be a moving target unless paired with narrow definitions, auditable controls, and penalties for misuse.

The contrast amounts to a precedent-setting question: Will frontier AI contracts hinge on broad legality standards, or on firm, contractually binding red lines tied to concrete technical and operational safeguards?

Public Sentiment and Market Signals After DoD Deal

Early indicators suggest the debate is resonating beyond Washington. Third-party app intelligence firms observed a 295% surge in ChatGPT uninstalls after the DoD deal became public. In his memo, Amodei told staff that Anthropic’s app climbed near the top of the iOS charts, claiming a No. 2 ranking, and argued that the broader public sees Anthropic’s stance as the more trustworthy one.

Enterprise buyers are also paying attention. Legal, compliance, and security teams increasingly ask vendors to codify use restrictions, map model capabilities to risk frameworks, and provide audit hooks. Where OpenAI points to legal boundaries, Anthropic is pressing for contractual clauses that survive policy shifts and administration changes—an approach more aligned with controls in the NIST AI Risk Management Framework and widely adopted model cards and system cards.

Defense AI’s Rapidly Shifting Ground and Programs

The Pentagon has published Responsible AI Tenets and implementation guidance, and it operates within autonomy policies such as DoD Directive 3000.09, which governs weapon system development. Yet the department is simultaneously accelerating programs like Replicator to field large numbers of autonomous and attritable systems. That push, along with hundreds of ongoing AI projects across the services, is drawing software-first players deeper into the national security ecosystem.

The last time Silicon Valley’s values collided this directly with defense work—during the Project Maven controversy—employee backlash at a major tech firm scuttled a high-profile AI imaging contract and reshaped recruiting for years. Today’s fight is more nuanced: both Anthropic and OpenAI say they want safety, but they disagree on whether legal standards alone are sufficient or whether bright-line bans are the only credible assurances.

What to Watch Next in Defense AI Contract Debate

Key open questions now include whether OpenAI will publish contract language or third-party attestations that clarify the limits it describes, and whether other labs adopt Anthropic’s harder lines on surveillance and weaponization. Watch for follow-on guidance from the Defense Department’s Chief Digital and Artificial Intelligence Office and any updates to acquisition templates that define “lawful” in practice.

Regardless of who wins this round of messaging, the outcome will influence standard-setting across the industry. If “all lawful purposes” becomes the default, expect firms to invest more in compliance narratives. If explicit prohibitions take hold, defense contracts for frontier models will likely include tighter model access controls, red-teaming on dual-use risks, and enforceable remedies for violations. For now, the sharpest line in the sand is the one Amodei just drew.

Gregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.