Anthropic CEO Dario Amodei sought to counter attacks from high-profile Trump administration officials and industry allies who accuse the company of stoking AI “fear-mongering.” In a lengthy statement, Amodei defended Anthropic’s approach, which pairs useful products with candid discussion of risk and pragmatic collaboration with policymakers of either party.
Amodei Rejects Insinuations of Regulatory Capture
The feud intensified when posts by allies of the White House, including venture capitalist David Sacks and policy adviser Sriram Krishnan, claimed Anthropic was stoking alarm to bend rules in its favor. The criticism followed remarks by co-founder Jack Clark that powerful AI systems can behave in ways that remain difficult to predict, a view broadly echoed by safety researchers.
Amodei rejected the narrative that Anthropic is trying to hamstring rivals, saying the company discusses potential benefits and threats openly because the technology is so consequential. He cast the debate as a false choice between “innovation and safety” and argued that responsible guardrails are necessary to sustain long-term U.S. leadership.
Policy Over Politics and Work With Washington
Seeking to shore up relations with the administration and its key priorities, Amodei underscored Anthropic’s federal ties: the company makes its Claude models available to U.S. agencies and, according to published reports, has signed a $200 million deal with the Department of Defense.
He also cited Anthropic’s public backing for the administration’s AI Action Plan and for expanding domestic electricity capacity to power AI infrastructure, an area where industry experts say grid resilience and power availability have become strategic choke points.
Amodei said the company’s aim is to keep critical AI systems under American control even as the technology delivers material benefits to society. That framing dovetails with broader federal work such as NIST’s AI Risk Management Framework, which agencies and contractors increasingly cite when procuring or assessing AI systems.
State Authority and the SB 53 Flashpoint
The sharpest split is over whether states should be barred from regulating AI for 10 years. Anthropic opposed such a moratorium, while some Silicon Valley leaders have backed it, fearing a patchwork of state rules. The company supported California’s SB 53, which mandates transparency about frontier model safety protocols while exempting firms with less than $500 million in annual revenue, a carve-out designed to spare most startups.
Advocates of the bill say it merely codifies common-sense safety practices already outlined in federal and international frameworks. Critics worry that any state-level process risks scope creep. The tension over state bills also reflects a broader national trend: more than a dozen states have pursued AI legislation focused on high-risk uses such as employment screening, education, and public-sector procurement, while Congress mulls national standards.
China, Chips and Security Posture in AI Competition
Amodei also dismissed arguments that caution cedes ground to competitors abroad. He countered that the more immediate danger is funneling advanced compute to strategic rivals, adding that Anthropic declines to sell its services to Chinese-controlled entities despite the revenue at stake. That stance aligns with Commerce Department export controls restricting shipments of high-end AI chips to China and other countries of concern, an area tracked closely by policy watchers such as the Center for Security and Emerging Technology.
The real debate is not whether national security matters, which few in the sector would dispute, but which tools best reduce risk. Safety advocates point to incident taxonomies, model evaluations, and red-teaming guidance from bodies such as NIST and the OECD as pragmatic measures that raise the floor for responsible deployment without stifling innovation.
Safety Versus Speed Divides the Industry and Investors
Anthropic’s position has riled some growth-at-all-costs voices, including prominent operators who argue that new rules will slow founders and entrench incumbents. Others in policy circles respond that light-touch, risk-based rules, similar to the commitments discussed at the UK’s AI Safety Summit and the G7’s Hiroshima Process, offer clearer incentives for best practice without prescribing a particular design.
Amodei argued that the company’s business performance belies the idea that safety-first is anti-growth, pointing to a roughly sevenfold climb from about a $1 billion to a $7 billion run rate in recent months as Claude adoption booms. Startups remain core customers, with tens of thousands using Anthropic’s tools via accelerators and venture portfolios.
What to Watch Next in U.S. AI Policy, Chips and Energy
The immediate policy battlefields are clear: whether federal rules will preempt or harmonize with state ones, how risks will be measured and reported, and how the United States navigates energy expansion amid climate change and infrastructure limits. Expect continued White House pressure to ramp up domestic compute and secure chip supply chains, along with state experimentation in high-risk AI oversight.
Amodei concluded by signaling that he would keep engaging publicly and, when needed, defend positions that put Anthropic at odds with parts of the tech ecosystem. With safety standards maturing and procurement requirements spreading through government and enterprise, the debate may soon shift from whether to set guardrails to which ones matter most and how to ensure they work.