A new court filing has surfaced a striking contradiction at the center of the government’s rift with Anthropic. According to sworn testimony attached to the company’s reply brief, a senior Pentagon official told Anthropic leadership that the two sides were “very close” on the very policies now cited as national security red flags—just days after President Trump publicly declared the relationship over. The disclosure punches a hole in the narrative of an unbridgeable policy gap and could reshape a fast-moving fight over who sets the guardrails for military AI.
What the Court Filing Reveals About Pentagon-Anthropic Alignment
Anthropic submitted declarations from its Head of Policy, Sarah Heck, and Head of Public Sector, Thiyagu Ramasamy. Heck describes an email from the Pentagon’s Under Secretary Emil Michael telling CEO Dario Amodei the parties were “very close” on the two central issues: the company’s limits on autonomous weapons and its stance against mass surveillance of Americans. That message landed shortly after the Defense Department finalized a supply‑chain risk designation against the company, and before officials began publicly describing talks as dormant or dead.
The timeline, as laid out in the filing, raises an uncomfortable question for the government: if the company’s positions on those two topics truly render it an unacceptable risk, why did a top defense official privately say alignment was within reach? Heck stops short of alleging leverage or retaliation, but the contemporaneous note is likely to loom large at the upcoming hearing in San Francisco before Judge Rita Lin.
Anthropic’s Rebuttal To National Security Claims
Heck disputes a centerpiece of the government’s argument—that Anthropic insisted on an approval role over military operations. “At no time” did Anthropic seek that authority, she states, adding that fears the company could disable its systems mid‑mission were never raised during months of negotiations and surfaced for the first time in court filings. The assertion matters: in contracting, unvetted operational constraints can legitimately trigger risk flags, but raising them only after litigation began undercuts the claim that they posed an imminent threat.
Ramasamy, who previously managed sensitive government AI deployments at a major cloud provider, attacks the technical premise behind the alleged “operational veto.” Once Anthropic’s Claude models are deployed in government‑secured, air‑gapped environments run by accredited contractors, he says the company has no backdoor, no remote kill switch, and no path to push unauthorized updates. Any material change would require the Pentagon’s explicit action through standard change‑control and Authority to Operate processes familiar across defense IT.
He also notes that Anthropic personnel supporting classified environments have held U.S. government clearances, and that cleared staff contributed to model builds intended for those settings—an uncommon practice in the commercial AI sector. The filing underscores that Anthropic cannot see user prompts or outputs from government deployments, aiming to deflate surveillance and data‑exfiltration fears.
The Legal Stakes And A Novel Designation
At issue is a supply‑chain risk designation that restricts federal use of Anthropic’s technology. The company argues it is the first time such a designation has been applied to a U.S. AI vendor and that the move punishes its publicly stated safety principles, violating the First Amendment. The government counters that Anthropic’s refusal to support the full range of lawful military uses is a business choice, not protected speech, and that the designation stems from a straightforward national security assessment.
Legal experts note that courts have traditionally granted wide deference to the executive branch on national security and procurement risk. Yet deference is not immunity: if the record shows pretext or viewpoint discrimination, judges can and do intervene. The private‑public contradiction highlighted in this filing could become a hinge point for whether the court sees a bona fide risk call or an effort to strong‑arm policy concessions outside normal acquisition channels.
Why This Matters for Defense AI and Future Procurement Rules
Beyond one company, the case touches every AI supplier navigating the Pentagon’s evolving rules on autonomy and domestic data use. Congress, think tanks like RAND and CSET, and the Defense Innovation Board have all urged clearer standards around human oversight of AI-enabled weapons and firm prohibitions on indiscriminate surveillance. Procurement friction is already a leading cause of stalled pilots; Government Accountability Office reports have repeatedly warned that opaque risk rulings and inconsistent due process chill competition and delay fielding.
The Pentagon has signaled it wants rapid access to commercial models while maintaining control over mission-critical risk. Vendors, for their part, are building guardrails to keep their systems from aiding unlawful targeting or dragnet monitoring. The newly surfaced email suggests those positions are not mutually exclusive—and that alignment may be a matter of codifying governance, not ideology.
What to Watch Next as the Court Weighs AI Risk and Policy
The court could order a limited injunction, compel a clearer administrative record, or nudge both parties back to the table to formalize deployment protocols around autonomy, change control, and data boundaries. However it lands, the outcome will set a precedent for how Washington arbitrates safety guardrails in frontline AI systems—and whether private emails or public statements carry more weight when billions in national security technology and trust are at stake.