Anthropic has endorsed SB 53, a California proposal intended to establish transparency and safety baselines for the most powerful AI systems. The endorsement stands out in a debate in which many industry groups, the Consumer Technology Association and the Chamber of Progress among them, have said the bill would stifle innovation. Coming from a frontier model developer, the move signals a practical readiness to embrace guardrails for high‑risk AI, particularly as federal action remains in doubt.
The company framed its support as a recognition that frontier models are advancing faster than policy consensus. Though it prefers federal standards to a patchwork of state rules, Anthropic's message is that waiting for Washington could leave important gaps. In its view, SB 53 is a manageable way to codify practices responsible labs already follow.
What SB 53 would mandate
SB 53 takes aim at “frontier” developers – think OpenAI, Anthropic, Google and xAI – by requiring them to adopt written safety standards and publish safety and security reports before deploying high‑capability models. The goal is to make pre‑deployment risk assessments standard, not voluntary.
The legislation centers on preventing catastrophic risks, defined as events that could cause at least 50 deaths or more than $1 billion in damages. That framing signals a focus on concrete abuse cases, such as expert‑level biological threat assistance or high‑impact cyberattacks, rather than quotidian harms like deepfakes or model sycophancy.
SB 53 includes whistleblower protections so that workers can raise safety concerns without fear of retaliation. It also exempts smaller companies, concentrating obligations on the biggest players (those with more than $500 million in gross revenue) on the theory that extreme capability and deployment scale are concentrated in a handful of firms.
Why Anthropic’s endorsement matters
It has been difficult to get the industry aligned on AI safety regulation. By supporting SB 53, Anthropic is effectively arguing that the compliance burden is real but ultimately reasonable. The company already publishes model cards and red‑team results; codifying these and other disclosures would turn voluntary norms into obligations, enforceable with fines for noncompliance.
The endorsement could also change the political math. Lawmakers are often told that state regulations will scare investment away. A leading frontier developer championing state‑level accountability undercuts the narrative that any form of regulation is the death of competitiveness, and it could rally a coalition of researchers, civil society groups, and responsible‑AI teams behind concrete safeguards.
Opposition and constitutional questions
Trade groups and venture investors have cautioned that state mandates will splinter the regulatory environment and leave companies exposed to conflicting requirements. Matt Perault and Jai Ramaswamy, policy leads at Andreessen Horowitz, recently wrote that many state AI bills risk violating the Constitution's Commerce Clause by burdening interstate commerce.
OpenAI's global affairs chief, Chris Lehane, sent a letter urging California not to enact laws that might drive startups out of the state, though the letter did not mention SB 53. That stance drew a sharp rebuttal from former OpenAI policy researcher Miles Brundage, who argued that the concerns misread the bill's scope. The text is plain: it targets the biggest companies, not early‑stage startups.
How SB 53 builds on previous work
California’s earlier frontier AI bill, SB 1047, was vetoed after intense criticism from parts of the tech ecosystem. SB 53 is narrower. Lawmakers most recently struck a requirement for mandatory third‑party audits, a provision industry had flagged as a top concern over operational burden and confidentiality.
That recalibration has won cautious praise from some policy experts. Dean Ball of the Foundation for American Innovation, an early opponent of SB 1047, has said SB 53 is a more technically grounded and restrained draft, which improves its prospects of becoming law. The bill's drafters also drew on an expert group convened by the governor and co‑chaired by Stanford's Fei‑Fei Li to align its requirements with what labs can actually do.
What it means for California and beyond
California is home to the world’s leading AI labs and the greatest concentration of AI talent. Rules hashed out in Sacramento tend to ripple outward; privacy law is the obvious precedent. If SB 53 is enacted, it could serve as a model for other jurisdictions or as a reference point when federal agencies revise guidance such as the NIST AI Risk Management Framework.
Anthropic co‑founder Jack Clark has said the industry cannot afford to wait for a perfect federal consensus while capabilities continue to advance. Seen that way, SB 53 serves as a floor rather than a ceiling: a set of minimum requirements for risk analysis, transparency, and internal escalation ahead of the next wave of frontier systems.
There are still steps left in the legislative process. One final vote is needed, and the governor, who vetoed SB 1047 last year, has not said how he will act. But with a major lab publicly in favor of SB 53, and the bill scoped to the highest‑risk actors, the center of gravity in California's AI debate may be shifting toward codified, enforceable safety norms.