Anthropic has filed suit against the Pentagon even as its flagship AI assistant, Claude, rockets up global app charts. The company says it is adding more than one million new users daily worldwide, a surge that coincides with a federal directive cutting off government use of its technology.
The legal fight and the adoption boom are moving in lockstep. While federal agencies unwind contracts, consumers appear to be rewarding Anthropic for drawing a hard line on how its AI should be used, turning a Washington dispute into a viral growth engine.
Anthropic Files Suit Over Retaliation Claims
In a complaint filed in the U.S. District Court for the Northern District of California, Anthropic alleges the federal government retaliated after the company refused to support applications it considers unsafe, including autonomous lethal targeting and mass surveillance of Americans. The filing argues that a sweeping order directing agencies to immediately stop using Anthropic’s systems violated the First Amendment, the Fifth Amendment’s due process protections, and the Administrative Procedure Act.
Anthropic also challenges the Pentagon’s designation of the company as a “supply chain risk,” noting that such labels have historically targeted foreign vendors seen as national security threats. The company is asking the court to invalidate the government’s actions and issue an injunction preventing enforcement while the case proceeds.
After the directive, the General Services Administration terminated Anthropic’s governmentwide contract, and agencies including the Treasury Department, Federal Housing Finance Agency, and State Department said they would cut ties. Anthropic maintains that the government acted without the notice-and-record process typical in federal procurement disputes, a point likely to feature prominently under the APA’s arbitrary-and-capricious standard.
Government Contracts Unravel Amid Pentagon Dispute
Before the breakdown, Claude-powered tools had become widely used inside the Defense Department, with access reportedly extending to certain classified systems. Tensions escalated when Defense Secretary Pete Hegseth pushed to expand AI across the military and sought fewer restrictions from vendors, prompting a round of renegotiations with major providers.
According to reporting cited by the company, talks were fraught, with the Pentagon’s chief technology officer, Emil Michael, clashing with Anthropic CEO Dario Amodei over safety and control. As negotiations stalled, the department explored a fallback with OpenAI, whose chief executive Sam Altman had been courting defense officials. Hours after Anthropic missed a Pentagon-imposed deadline, OpenAI announced a deal with the department, reshaping the competitive landscape overnight.
Claude Downloads Surge Despite Federal Usage Ban
The public response has moved in the opposite direction of Washington. Anthropic says it is onboarding more than one million new users a day and has broken internal signup records repeatedly since the dispute began. Mobile analytics firm Appfigures reports Claude is the No. 1 app on Apple’s App Store in 16 countries and has overtaken OpenAI’s ChatGPT and Google’s Gemini in more than 20 markets.
The spike mirrors a familiar “backfire” effect in tech: attempted restrictions can heighten visibility and galvanize supporters. Beyond the optics, Anthropic’s guardrails-first positioning may be resonating with users who want high capability without compromising on safety, especially in education, research, and professional settings where provenance and restraint matter.
Legal Stakes and Industry Fallout from the Case
Anthropic’s First Amendment claim frames the government’s actions as viewpoint retaliation for refusing military uses it deemed unsafe. Its Fifth Amendment theory centers on due process, arguing the company was blacklisted without a fair procedure. The APA challenge contends agencies acted without a reasoned basis or proper rulemaking, a claim challengers rarely win when the government can point to a documented national security rationale, which courts review deferentially.
The case will test how far federal officials can go in centralizing vendor exclusions across agencies, and whether supply chain risk tools—historically invoked for foreign hardware and telecom firms—can be extended to a domestic AI provider. Past moves against companies like Huawei and Kaspersky hinged on foreign influence concerns; applying similar machinery to a U.S. developer raises novel constitutional and procurement questions.
For the AI sector, the outcome could set the tone for how safety policies intersect with government demand. A court-ordered pause would preserve Anthropic’s public momentum and give enterprises cover to keep piloting Claude. If the government prevails, expect faster consolidation around vendors willing to align with defense priorities, and fresh scrutiny of how platform guardrails are negotiated in classified and sensitive environments.
Either way, the combination of courtroom drama and record-breaking downloads underscores a reality of the AI era: product adoption and policy legitimacy now move together, and each can rapidly reshape the other.