Hundreds of technologists and investors are pressing the Department of Defense and Congress to reverse a move labeling Anthropic a “supply chain risk,” warning that it weaponizes procurement authority against a domestic AI firm and chills collaboration across the sector. In an open letter, signatories from companies and funds including OpenAI, Slack, IBM, Cursor, and Salesforce Ventures argue the designation is retaliation rather than a genuine security judgment, and urge lawmakers to scrutinize the use of such powers.
Open Letter Challenges Extraordinary Powers
The letter follows a standoff in which Anthropic declined to grant the Pentagon unrestricted access to its models. The AI lab drew firm boundaries: its systems should not be used for mass surveillance of Americans or to enable autonomous weapons that can select and fire on targets without a human in the loop. Defense officials said those uses were not planned but resisted being constrained by vendor-imposed conditions.
After Anthropic CEO Dario Amodei held the line, President Donald Trump directed federal agencies to wind down use of Anthropic technology over a six-month transition. Defense Secretary Pete Hegseth then said he would label the company a supply chain risk, a step generally reserved for vendors tied to foreign adversaries. Anthropic responded that such a designation would be legally unsound and vowed to contest it in court.
Signatories say the episode sets a precedent that punishes a contractor for refusing last-minute changes to terms. They are asking Congress to examine whether supply chain risk authorities are being stretched beyond their intent and to reinforce guardrails that separate national security assessments from commercial leverage.
What a Supply Chain Risk Label Actually Requires
Despite rhetoric on social media, a federal contractor cannot be blacklisted overnight. Under long-standing “Section 806” authorities, now codified in Title 10, the Pentagon must conduct a risk assessment, obtain a determination from senior officials, and notify Congress before excluding a source on supply chain grounds. Interagency processes established under the Federal Acquisition Supply Chain Security Act similarly require evidence and due process before governmentwide exclusion orders take effect.
Historically, such actions have targeted foreign-linked entities—think Huawei in telecom infrastructure, Kaspersky Lab in endpoint security, or DJI drones for sensitive missions—based on documented counterintelligence and cybersecurity concerns. Applying a similar label to a U.S.-based AI model provider over contractual red lines is unusual, procurement attorneys note, and could spur court challenges on administrative law and statutory authority.
The stakes are substantial. Federal agencies are rapidly incorporating AI, and the Government Accountability Office has cataloged hundreds of AI use cases across the civilian and defense enterprise. NIST’s AI Risk Management Framework and the GAO’s AI accountability guide emphasize transparent risk controls and human oversight—principles at the heart of Anthropic’s position.
AI Red Lines and Keeping a Human in the Loop
Anthropic’s insistence on human-in-the-loop constraints aligns with Department of Defense policy. DoD Directive 3000.09 on autonomy in weapon systems requires appropriate human judgment in the use of force. While interpretations vary, civil society groups and many AI researchers argue that vendors should codify these boundaries in access terms and technical controls.
Boaz Barak, a well-known AI researcher, echoed that view publicly, calling mass surveillance a “personal red line” and urging the field to treat state abuse risks with the same rigor applied to biosecurity and cybersecurity. In a parallel development, OpenAI announced its models will be available within DoD classified environments while stating it shares the same red lines on domestic mass surveillance and fully autonomous weapons.
Retaliation Concerns and the Market Signals Ahead
Industry leaders worry that branding a domestic AI supplier as a supply chain risk over a terms dispute will deter startups from engaging with defense at all. Venture investors say it injects political risk into contracts already burdened by long sales cycles and stringent compliance, complicating decisions about whether to build for national security customers.
There is also a transparency concern. If the designation proceeds without a robust, published record, it could erode confidence in the supply chain risk management (SCRM) tools Congress created to counter genuine espionage and sabotage threats. Procurement experts warn that overuse or misuse of these authorities can invite legal defeats, undermine deterrence, and fragment the vendor base just as agencies seek cutting-edge AI capabilities.
What to Watch Next as the Supply Chain Risk Fight Unfolds
The next moves will likely unfold on parallel tracks: an internal DoD risk assessment process subject to congressional notification, potential litigation from Anthropic challenging any exclusion, and behind-the-scenes efforts by lawmakers to seek briefings or rein in the designation if it appears unsubstantiated. For AI providers and federal buyers alike, the outcome will signal whether principled red lines can coexist with national security procurement, or whether the price of access is abandoning them.