California’s new AI safety and transparency law, SB 53, is being cast as a compromise that shows smart guardrails can be erected alongside rapid innovation. Instead of prescribing a way to build models, the statute requires the state’s largest AI developers to be explicit about the safety protocols they say they follow and then actually follow those protocols — especially when it comes to catastrophic misuse hazards like attacks on critical infrastructure or creating biohazards.
That design choice is consequential. By making voluntary promises binding commitments, California is trying out a model of governance that matches the way high-growth tech companies already run their business — and doesn’t strangle research or product velocity.

What SB 53 Actually Requires from Large AI Developers
SB 53 zeroes in on transparency and accountability for the largest-scale AI labs, those with the resources to build or run frontier systems. The law requires these companies to disclose how they evaluate and manage catastrophic risks, from model misuse to model-enabled cyberattacks, and to maintain those protections as their systems evolve. California's Office of Emergency Services will monitor compliance, a nod to the fact that the harms the bill targets would fall heavily on the state's critical infrastructure and public safety systems.
The statute also takes aim at a real-world pressure point: competitive dynamics that can cause firms to ease up on guardrails when a rival rolls out something riskier. The law serves as a backstop against a race to the bottom by binding companies to safety baselines they have already established. It is less prescriptive than earlier proposals, trading sprawling mandates for a narrow focus on the highest-risk systems.
Why This Isn’t a Brake on AI Progress and Innovation
None of these ideas originated in California. The National Institute of Standards and Technology's AI Risk Management Framework has quickly become a common reference for risk assessment, measurement, and governance. Top labs have fielded their own playbooks, including Anthropic's Responsible Scaling Policy, OpenAI's Preparedness Framework, and Google DeepMind's assurance programs, outlining red-teaming, incident response, and escalation thresholds. SB 53 essentially carries that direction of travel into public policy for the narrow slice of systems that pose outsize risk.
There's clear evidence that clarity helps rather than hinders adoption: enterprise buyers routinely demand assurances about model provenance, safety, and security before approving large AI deployments. The U.S. still leads the world in private AI investment, and California captures a large share of it; clear rules reduce procurement friction and can speed go-to-market for vendors that meet them. In that context, baseline safety transparency looks less like red tape and more like table stakes.
The Politics Around Preemption in State AI Regulation
SB 53 arrives amid a larger fight over whether states should have any significant role in AI regulation. Some industry voices and their political allies have pushed for a 10-year pause on state regulation, and campaign filings show heavy spending to boost pro-industry candidates in key state races. A more recent federal proposal, the SANDBOX Act introduced by Senator Ted Cruz, would offer waivers letting companies sidestep certain federal rules for extended periods, an approach critics say could serve as backdoor preemption of the states.
Advocates like Encode AI, which united hundreds of groups against blanket preemption, call for a "floor, not ceiling" approach: set federal minimums and leave states free to address local risks. It's a familiar model. California privacy law jump-started national action, and the state's auto emissions standards have long nudged cleaner technology across the market. In that lineage, I see SB 53 as a sensible first step for high-stakes AI, not the entire roadmap.

The Chip Question and True Competitiveness
If the objective is to ensure that the U.S. leads other nations in AI, most of the leverage lies outside safety transparency. Compute supply, export controls, and domestic manufacturing matter far more to competitive outcomes. The CHIPS and Science Act appropriated about $52.7 billion to increase U.S. semiconductor capacity, and more recent proposals such as the Chip Security Act would add tighter oversight to ensure advanced AI chips aren't diverted to strategic rivals.
Policy whiplash hasn't helped: restrictions on high-end GPUs were tightened, then a later federal reversal permitted some chip sales back into certain restricted markets under revenue-sharing terms. Firms with significant exposure to those markets have clear incentives to oppose tighter restrictions. But none of that is affected by a California rule requiring big labs to document and maintain catastrophic-risk safeguards. It's a category error to conflate chip geopolitics with light-touch safety transparency.
What to Watch Next as California Implements SB 53
Implementation will determine success. Cal OES will have to specify which models and compute scales trigger obligations, harmonize reporting formats with widely used frameworks so companies aren't swamped by redundant paperwork, and establish a credible enforcement posture that prioritizes remediation over punishment for businesses acting in good faith. Coordination with federal agencies, such as CISA on infrastructure risks and NIST on measurement, would also help keep the compliance burden reasonable.
SB 53 probably won't bite for startups, most of which operate well below the frontier scale the law targets. For big developers, the benefit is commercial:
- Transparent safety attestations make enterprise purchasing decisions easier.
- They help with insurance underwriting.
- They support sales into the public sector.
The law closes the gap between what labs say and do — without telling them how to innovate.
The California model won't resolve every AI policy question, and it shouldn't have to. But by managing the most severe risks pragmatically, SB 53 shows that regulation and innovation can complement one another. In a market where trust is a competitive advantage, clear safety rules are not a ball and chain; they're gasoline.