Anthropic has backed SB 53, a California proposal that would require frontier AI developers to adhere to clear safety practices and publish safety and security reports before they release powerful models. The endorsement is a significant departure from the resistance of much of the tech industry and suggests that at least one leading lab is prepared to embrace enforceable transparency rules rather than voluntary commitments.
What SB 53 actually does
SB 53 targets top-end capabilities, those of companies like Anthropic, OpenAI, Google, and xAI, by requiring documented risk management plans and public pre-deployment safety and security disclosures. It also creates whistleblower protections for employees who report safety concerns, an effort to surface issues before deployment rather than after.

The bill focuses on “catastrophic risk,” setting that threshold at a death toll of at least 50 people or damage exceeding $1 billion. In practice, that means preventing sophisticated models from providing expert-grade assistance in areas such as biological weaponization or high-impact cyberattacks, rather than targeting consumer harms like deepfakes or bias on their own.
Lawmakers limited the scope of the law to the biggest players by making coverage dependent on scale, with a gross-revenue test designed to exempt startups. Recent amendments also struck a third-party audit requirement that industry groups had found burdensome, an attempt to balance safety with what is practicable.
Why Anthropic’s backing is pivotal
Anthropic has long contended that AI rules are best set at the federal level and grounded in risk, echoing guidance such as the National Institute of Standards and Technology’s AI Risk Management Framework. Its support for a state bill therefore signals a pragmatic shift: model capabilities are advancing faster than federal consensus. Jack Clark, a co-founder, said the industry cannot wait for a unified federal regime to put guardrails in place.
Many frontier labs already publish various safety materials: model cards, red-team summaries, responsible scaling plans. The difference with SB 53 is enforceability: instead of public reporting being a best-effort blog post, it becomes a legal requirement with penalties. For policymakers who fear that safety promises will slacken as competition intensifies, that is a material change.
The pushback and the constitutional minefield
Trade groups such as the Consumer Technology Association and Chamber of Progress have fought SB 53, citing a potential patchwork of state rules and compliance costs that could impede innovation. Prominent investors have sounded similar alarms, and policy leaders at Andreessen Horowitz have argued that broad state AI mandates could violate the Constitution’s Commerce Clause if they effectively regulate activity across state lines.

California’s governor had previously vetoed a broader AI safety bill, SB 1047. SB 53 is narrower by design, aimed at transparency and extreme-risk mitigation, and drops the earlier third-party audit requirement. That trimming has earned tentative praise from some skeptics. Dean Ball of the Foundation for American Innovation, a critic of SB 1047, called the new approach an improvement that reflects both an understanding of the technical facts and legislative restraint.
The bill also incorporates the work of an expert panel assembled by the governor and co-chaired by the Stanford professor Fei-Fei Li, a sign that the state is drawing on academic and industry expertise rather than drafting rules in a vacuum.
How it fits into the evolving safety playbook
SB 53 does not attempt to rewrite federal programs; it complements them. The White House’s AI executive actions already call for reporting on large-scale model testing under national security authorities, and NIST offers voluntary risk guidance. California’s measure would codify that logic with state-level transparency triggers and whistleblower protections, laying down a marker for the world’s biggest developers operating in the largest tech market.
Real-world research has shown why extreme-risk protections matter. Government and private-sector red teams have demonstrated that advanced models, without constraints, can raise a novice’s capabilities in a cyber intrusion workflow or provide step-by-step guidance that encroaches on sensitive biological areas. Labs have responded with content filtering, fine-tuning, and system-level restrictions, but standards differ. SB 53 would make the “show your work” part non-negotiable.
What to watch next
Next come procedural milestones: a final legislative vote and the governor’s action. If enacted, agencies would have to calibrate exact reporting formats, enforcement deadlines, and the thresholds distinguishing genuine “frontier” systems from fast followers. Watch for possible legal challenges over interstate reach and preemption, and for whether other states import California’s model, as they did with privacy and auto emissions.
For the big AI labs, Anthropic’s endorsement raises the stakes: resisting all forms of state action gets harder when a peer signals openness to credible safeguards. For startups, the revenue threshold and the focus on catastrophic risk suggest relatively low near-term impact, but any state standard sets a bar that trickles into platform, investor, and partner expectations. Either way, SB 53 moves the frontier safety conversation, for the first time, from aspiration to the edge of enforceable practice.
