Anthropic’s Claude remains active inside U.S. military workflows even as a growing list of defense-tech customers moves to replace it, a split-screen reality created by overlapping federal directives and the pace of wartime operations.
Why The Pentagon Still Retains Access To Claude Systems
Conflicting guidance from Washington has put Claude in a legal gray zone. A White House directive told civilian agencies to step away from Anthropic tools while allowing the Department of Defense a months-long wind-down. Hostilities between the U.S., its allies, and Iran escalated before that wind-down period elapsed, leaving the Pentagon free to keep using the model in mission support.

Defense Secretary Pete Hegseth has said he intends to flag Anthropic as a supply-chain risk, a move that could bar procurement across many federal programs. But no formal designation has landed. Without that, program managers face no hard prohibition, and operational commanders are unlikely to abandon a working system while sorties are ongoing.
Inside The Targeting Workflow For Real-Time Strike Planning
Reporting by The Washington Post detailed how Claude is paired with Palantir’s Maven platform to accelerate strike planning. Officials used the combined systems to surface target sets, generate coordinates, and prioritize by mission relevance, with the newspaper describing “real-time targeting and target prioritization.” In practice, that means Claude acts as a fast filter and recommender layered under human oversight and established rules of engagement.
The rapid adoption fits a broader Pentagon pattern: AI is pushed to the edge of the kill chain for triage, fusion, and deconfliction while keeping a human in the loop for lethal decisions. The payoff is speed—shortening the time from sensor to shooter—alongside better consistency when analysts are processing massive volumes of imagery and signals in compressed timelines.
Why Prime Contractors Are Pivoting To Alternative AI Models
Even as Claude persists in combat support, its commercial standing in the defense industrial base is eroding. Reuters reported that prime contractors, including Lockheed Martin, have begun swapping out Anthropic models for competitors. The ripple is moving through the subcontractor tier as well. A managing partner at J2 Ventures told CNBC that 10 of the firm’s portfolio companies have stepped back from Claude for defense use cases and are actively migrating.
The calculus is straightforward. Program teams are weighing operational gains against regulatory uncertainty, potential procurement bans, and the cost of re-authorization. Switching now reduces lock-in risk and avoids mid-contract disruption if a supply-chain risk designation arrives. It also lets integrators align with customer preferences as agencies standardize on vetted model catalogs.

Vendors report they are testing a mix of alternatives—rival proprietary models, open-weight systems they can harden for specific classification levels, and ensemble approaches that route tasks based on sensitivity and performance. For sensitive missions, many are emphasizing reproducibility, strict data handling agreements, and auditable model behavior over raw benchmark scores.
What A Supply Chain Risk Label Would Trigger
If the Pentagon formally tags Anthropic as a supply-chain risk, it could cascade through contracts and task orders, effectively excluding Claude from new awards and many active vehicles. Similar actions in the past—such as federal bans on Kaspersky products—prompted rapid vendor removals, bid protests, and legal challenges. A designation here would likely ignite a contentious process before the Government Accountability Office or the U.S. Court of Federal Claims.
Short of a ban, agencies can still narrow the aperture by tightening model accreditation, mandating government-only hosting environments, or restricting use to non-operational workloads. Any of those steps would further steer integrators toward models with clearer compliance postures for defense missions.
The Strategic Picture For Defense AI Adoption And Resilience
The divergence highlights a maturing AI stack in national security. In the near term, commanders will use the tools they have, especially when they demonstrably compress decision cycles. In parallel, the defense industrial base is already normalizing model interchangeability: building abstraction layers, honing evaluation pipelines for mission-specific accuracy, and planning for rapid swaps when policy winds shift.
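The abstraction-layer pattern described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the general idea, not any contractor's actual stack: workloads call a stable interface, and the underlying model vendor is a configuration choice that can be swapped without rewriting application code.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

# Hypothetical sketch of a vendor-abstraction layer: mission workloads
# talk to a router, and the backing model provider is a registration
# plus one activation call, so a policy-driven swap is a config change.

@dataclass
class ModelProvider:
    name: str
    generate: Callable[[str], str]  # prompt -> completion

class ModelRouter:
    def __init__(self) -> None:
        self._providers: Dict[str, ModelProvider] = {}
        self._active: Optional[str] = None

    def register(self, provider: ModelProvider) -> None:
        self._providers[provider.name] = provider

    def activate(self, name: str) -> None:
        # Swapping vendors is a policy decision, not a code rewrite.
        if name not in self._providers:
            raise KeyError(f"unknown provider: {name}")
        self._active = name

    def generate(self, prompt: str) -> str:
        if self._active is None:
            raise RuntimeError("no active provider configured")
        return self._providers[self._active].generate(prompt)

# Two stand-in vendors; real integrations would wrap each vendor's API.
router = ModelRouter()
router.register(ModelProvider("vendor_a", lambda p: f"[A] {p}"))
router.register(ModelProvider("vendor_b", lambda p: f"[B] {p}"))

router.activate("vendor_a")
before_swap = router.generate("summarize sensor feed")

router.activate("vendor_b")  # policy shift: one-line provider swap
after_swap = router.generate("summarize sensor feed")
```

The point of the pattern is that evaluation pipelines and mission code depend only on the router interface, so a supply-chain designation or accreditation change triggers a re-registration, not a re-integration.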
The open question is timing. If a formal restriction arrives, expect an accelerated drawdown and wider adoption of model portfolios tailored to defense requirements. If it does not, Claude may quietly persist in operational niches. Either way, the market signal is clear: for defense AI, resilience now means being ready to pivot—fast.
