Anthropic is holding the line on its AI safety rules as the Pentagon ratchets up pressure, setting a short deadline for the company to open broader access to its model or face punitive measures. According to multiple reports, defense leaders have warned they could label the startup a supply chain risk or invoke the Defense Production Act to compel a military-tailored build.
The confrontation followed a meeting between Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei, as reported by Axios. Reuters has indicated the company does not plan to relax policies that bar mass surveillance and fully autonomous weapons. For now, neither side appears ready to compromise.
Pentagon raises the stakes with supply chain and DPA threats
Officials have floated two extraordinary levers. First, classifying Anthropic as a supply chain risk would give agencies cover to exclude the company from sensitive procurements, a tool more commonly used to keep foreign adversaries’ tech out of federal systems. Second, invoking the Defense Production Act to prioritize, or even compel, delivery of AI models for national defense uses would be a groundbreaking application of the statute.
The DPA is no relic. During the COVID-19 crisis it was used to accelerate production of ventilators and N95 masks, shifting industrial capacity in weeks. Applying it to AI guardrails, however, would mark a new frontier. The law’s traditional focus has been hardware, materials, and manufacturing throughput; using it to direct the behavior of a software model and its access controls would test both legal theory and agency practice.
Why Anthropic won’t bend on AI safety guardrails
Anthropic has staked its brand on strict usage policies, including prohibitions on mass domestic surveillance and end-to-end autonomy in kinetic targeting. Those commitments are embedded in product terms and reinforced by internal safety research. The company argues that meaningful guardrails are essential to prevent misuse as large models grow more capable.
Pentagon leaders counter that lawful military applications should be governed by statute and oversight, not by the private preferences of a contractor. That philosophical clash has become political too, with senior administration figures such as AI policy lead David Sacks publicly deriding Anthropic’s approach as overly ideological.
Complicating matters, several reports say Anthropic is the only frontier lab currently cleared for certain classified DOD environments. While the department has reportedly lined up xAI’s Grok for use in classified systems, that pathway is not yet a drop-in substitute for all mission needs. Limited redundancy strengthens the Pentagon’s hand rhetorically but narrows its practical options.
Can the Defense Production Act really compel AI access?
Legally, the DPA’s Title I allows the government to prioritize contracts deemed essential to national defense, and Title III lets it invest to expand industrial capacity. The Congressional Research Service has noted the statute’s broad scope, but most modern deployments have centered on tangible goods, critical minerals, and manufacturing services, not the content policy of a commercial AI system.
Forcing a model to operate without certain guardrails, or to create a bespoke military variant, would raise novel questions. Companies could challenge directives under the Administrative Procedure Act, arguing the action is arbitrary or exceeds statutory authority. Some legal scholars also see a potential First Amendment angle if the government compels expressive outputs or model behavior that a firm rejects on ethical grounds. While the DPA includes compensation mechanisms, that does not eliminate constitutional scrutiny.
There is also a practicality test. Even with a DPA order, the Pentagon would need secure deployment paths, rigorous red-teaming, and assurance frameworks to avoid cascading risks from a hastily modified model. The National Institute of Standards and Technology’s AI Risk Management Framework and recent DOD directives on responsible AI would still apply, adding process friction.
National security and market fallout from the AI clash
Declaring a leading domestic AI supplier a supply chain risk would be unprecedented and could ripple through procurement and venture markets. The Foundation for American Innovation’s Dean Ball has warned that threatening to sideline a firm over policy disagreements would chill investment and signal greater political risk in the U.S. tech ecosystem.
Allies are watching too. NATO partners are developing their own AI assurance regimes, and the EU’s AI Act is moving into implementation. A U.S. move to compel changes to a model’s safety posture could complicate cross-border compliance and push vendors to segment products by jurisdiction, increasing cost and slowing iteration.
There are middle paths. The Pentagon could pursue tiered access with hardened on-prem deployments, immutable audit logs, independent oversight boards, and mission-specific fine-tunes that preserve red lines on domestic surveillance and autonomous targeting. Those patterns mirror safety controls already used in other dual-use domains, from cryptography to satellite imaging.
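One of those controls, the immutable audit log, is commonly built as a hash chain: each entry commits to the hash of the entry before it, so any after-the-fact edit breaks verification. The sketch below is illustrative only; the class name and record fields are hypothetical and not drawn from any DOD or Anthropic system.

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry commits to the previous entry's
    hash, making retroactive tampering detectable."""

    GENESIS = "0" * 64  # placeholder hash before the first entry

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def append(self, record: dict) -> str:
        # Canonical serialization so verification is deterministic.
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256(
            (self._last_hash + payload).encode()
        ).hexdigest()
        self.entries.append(
            {"record": record, "prev": self._last_hash, "hash": entry_hash}
        )
        self._last_hash = entry_hash
        return entry_hash

    def verify(self) -> bool:
        # Recompute the chain from the start; any edited record or
        # reordered entry produces a mismatch.
        prev = self.GENESIS
        for entry in self.entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```

In a deployed system the chain head would also be anchored externally (for example, countersigned by an independent oversight body), since a party controlling the whole log could otherwise rewrite and re-hash it end to end.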
What happens next in the Pentagon–Anthropic standoff
The immediate question is whether either side blinks before the deadline. If the department moves to blacklist Anthropic, expect rapid legal challenges and contingency sourcing. If it triggers the DPA, prepare for a test case that could define how far the federal government can go in directing the behavior of foundation models.
Either outcome will set precedent well beyond one lab. The stakes include not only near-term military capabilities but also the long-term balance between democratic oversight, private governance of AI risks, and the durability of the U.S. innovation climate.