Anthropic has thrown its weight behind SB 53, a California proposal that would compel frontier AI developers to adopt clear safety practices and publish safety and security reports before releasing their most powerful models. The endorsement is a notable break from much of the tech industry’s resistance and signals that at least one leading lab is ready to accept enforceable transparency rules rather than voluntary promises.
What SB 53 actually does
SB 53 targets the highest-capability systems—models from companies like Anthropic, OpenAI, Google, and xAI—by requiring documented risk management plans and public pre-deployment safety and security disclosures. It also establishes whistleblower protections for employees who raise safety concerns, an attempt to surface problems before deployment rather than after the fact.

The bill centers on “catastrophic risks,” defining that threshold as the death of at least 50 people or more than a billion dollars in damages. In practice, that means preventing advanced models from providing expert-level assistance in areas like biological weaponization or high-impact cyberattacks, rather than focusing on consumer harms like deepfakes or bias alone.
Lawmakers narrowed the scope to the largest players by tying coverage to scale, including a gross revenue test intended to exempt startups. Recent amendments also removed a third-party audit mandate that industry groups called onerous, aiming to balance safety with feasibility.
Why Anthropic’s support is a turning point
Anthropic has long argued that the most sensible AI rules should be federal and risk-based, echoing recommendations from the National Institute of Standards and Technology’s AI Risk Management Framework. Its endorsement of a state bill underscores a pragmatic shift: powerful models are advancing faster than a national consensus can form. Co-founder Jack Clark has said the industry cannot wait for a unified federal regime before putting guardrails in place.
Most frontier labs already publish some safety materials—model cards, red-team summaries, and responsible scaling plans. The difference with SB 53 is enforceability: public reporting becomes a legal requirement backed by penalties, not a best-effort blog post. For policymakers worried about safety commitments slipping as competitive pressure mounts, that’s a material change.
The pushback and the constitutional minefield
Trade groups including the Consumer Technology Association and Chamber of Progress oppose SB 53, warning of a patchwork of state rules and compliance costs that could slow innovation. Prominent investors have raised similar alarms, and policy leaders at Andreessen Horowitz have argued that broad state AI mandates risk running afoul of the Constitution’s dormant Commerce Clause doctrine if they effectively regulate activity beyond state borders.

California’s governor previously vetoed a more expansive AI safety bill, SB 1047. SB 53 is narrower by design, focusing on transparency and extreme-risk mitigation and dropping the earlier third-party audit requirement. That trimming has drawn cautious praise from some skeptics. Dean Ball of the Foundation for American Innovation, who criticized SB 1047, said the revised approach shows a better grasp of technical realities and legislative restraint.
The bill also reflects input from an expert panel convened by the governor and co-led by Stanford’s Fei-Fei Li, signaling that the state is leaning on academic and industry expertise rather than crafting rules in a vacuum.
How it fits with the evolving safety playbook
SB 53 doesn’t attempt to rewrite federal efforts; it complements them. The White House’s AI executive actions already require reporting on large-scale model testing under national security authorities, and NIST provides voluntary risk guidance. California’s bill would localize that logic with state-level transparency triggers and whistleblower protections, creating a baseline for the biggest developers operating in the world’s largest tech market.
Real-world research has highlighted why extreme-risk safeguards matter. Government and independent red teams have shown that, without controls, advanced models can boost novice capabilities in cyber intrusion workflows or provide stepwise guidance that edges into sensitive biological domains. Labs have responded with content filters, fine-tuning, and system-level restrictions—but consistency varies. SB 53 aims to make the “show your work” part non-negotiable.
What to watch next
The next milestones are procedural: a final legislative vote and the governor’s decision. If enacted, agencies would need to define the precise reporting formats, enforcement timelines, and thresholds that distinguish genuinely “frontier” systems from fast-followers. Expect potential legal challenges on interstate reach and preemption, and watch whether other states import California’s model—much as they did with privacy and auto emissions.
For the largest AI labs, Anthropic’s endorsement raises the bar: opposing any and all state action becomes harder when a peer publicly accepts credible safeguards. For startups, the bill’s revenue threshold and focus on catastrophic risk suggest limited near-term impact, though state standards of this kind often spill over into platform, investor, and partner expectations. Either way, SB 53 has moved the frontier safety debate from aspiration to the brink of enforceable practice.