California State Sen. Scott Wiener is back with version 2 of his AI safety bill, this time disclosure-first rather than training-first, and squarely targeting the largest players in the field. His bill, SB 53, would compel high‑revenue AI labs to publish standardized safety reports on their most powerful models, forcing real substance behind the frequently vague “responsible AI” promises of Big Tech.
What SB 53 would likely force into the open
SB 53 targets frontier systems built by companies earning in excess of $500 million a year. It would require public disclosure of clear documentation on how models are tested and managed for catastrophic risks, such as facilitating mass-casualty events, supporting sophisticated cyberattacks, or lowering the barriers to chemical and biological weaponization.

In addition to public reporting, the bill establishes protected channels for workers to report safety threats to state officials — an AI‑era equivalent of financial whistleblower protections. It also sets up CalCompute, a state‑operated cloud cluster designed to diversify access to compute for academic and public‑interest research, breaking a reliance on infrastructure controlled by the same companies that build and deploy the most powerful models.
Importantly, SB 53 does not create universal liability for downstream impacts. That’s a departure from Wiener’s previous effort, SB 1047, which would have imposed broader liability on developers, drew fierce industry opposition, and was ultimately vetoed. The new bill is about transparency and process rather than broad legal exposure, a shift that has quieted industry opposition without gutting the policy’s substance.
Why California is acting on AI transparency requirements now
Wiener’s case is stark: waiting for Washington to pass comprehensive AI rules is not working. Federal policy has vacillated between prioritizing safety and prioritizing growth, and national guidance remains mostly voluntary. Into that void, California — which has already established itself as a setter of global norms with privacy laws such as the CCPA — sees an opportunity to define what meaningful AI transparency means for the most powerful systems.
Meanwhile, industry leaders have called for deference to federal standards. OpenAI has pushed for a single national rulebook as a reasonable ask for labs; venture firms have hinted at dormant Commerce Clause concerns about state‑level mandates. But states regularly impose product-safety and corporate-disclosure requirements when doing so protects the public and does not discriminate against out-of-state commerce, which would seem to put SB 53 on relatively solid legal ground if it is framed carefully to avoid extraterritorial reach.
The argument for mandated safety reporting
Today, safety information is inconsistent and too often marketing‑driven. Some labs publish model cards, system cards, or “dangerous capabilities” studies; others offer little beyond high‑level assurances. The Stanford AI Index has observed that the quality and comparability of safety assessments vary widely among vendors, making third‑party scrutiny difficult. The AI Incident Database, a project of the Partnership on AI, has compiled hundreds of cases of real-world failures and harms: evidence that voluntary efforts to date are falling short.
SB 53 would make routine the kinds of disclosures aligned with the NIST AI Risk Management Framework and evolving global practice. Concrete examples include threat models for catastrophic misuse; red‑team methodologies and coverage; bio, chem, and cyber assistance evaluations; adversarial testing results; and post‑deployment incident tracking. Instead of prescribing how AI should be built, the bill requires companies to “show their work.” A sketch of what such a standardized report could look like follows below.
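To make the comparability problem concrete, here is a minimal sketch, in Python, of what a standardized, machine-readable safety report could look like. The schema, field names, and values are purely illustrative assumptions, not drawn from the bill text or any lab’s actual reporting format; the point is that a shared structure is what would let researchers and customers compare disclosures field by field.

```python
# Hypothetical sketch of a standardized frontier-model safety disclosure.
# All class and field names are illustrative, not taken from SB 53 or any lab.
from dataclasses import dataclass, field, asdict
from datetime import date
import json


@dataclass
class Evaluation:
    """A single capability or misuse evaluation and its headline result."""
    domain: str           # e.g. "bio", "chem", "cyber"
    methodology: str      # how the test was run (expert red team, benchmark, etc.)
    result_summary: str   # plain-language outcome rather than raw scores


@dataclass
class SafetyReport:
    """One standardized disclosure for a single frontier-model release."""
    model_name: str
    developer: str
    report_date: date
    threat_models: list[str] = field(default_factory=list)      # catastrophic-misuse scenarios considered
    red_team_coverage: list[str] = field(default_factory=list)  # areas probed by adversarial testing
    evaluations: list[Evaluation] = field(default_factory=list)
    incidents_since_deployment: int = 0                         # post-deployment incident tracking

    def to_json(self) -> str:
        """Serialize to JSON so reports from different labs share one comparable format."""
        return json.dumps(asdict(self), default=str, indent=2)


# Illustrative usage with placeholder values.
report = SafetyReport(
    model_name="example-frontier-model",
    developer="Example Lab",
    report_date=date(2025, 1, 1),
    threat_models=["assistance with biological weaponization", "support for large-scale cyberattacks"],
    red_team_coverage=["bio", "cyber"],
    evaluations=[
        Evaluation("cyber", "expert red team, two-week engagement", "no meaningful uplift beyond public resources"),
    ],
)
print(report.to_json())
```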

There’s precedent abroad. The EU’s AI Act imposes systemic-risk management and transparency requirements on high‑impact models. The U.K. has set up an AI Safety Institute to independently test model behavior. SB 53 would put California’s largest AI developers on a similar cadence of disclosure, establishing a minimum level of transparency by which researchers, advocates, and enterprise customers can reasonably compare platforms.
Industry signals: from resistance to cautious engagement
Anthropic has publicly supported SB 53, casting it as a common-sense approach to safety that does not stifle innovation. Other labs have not endorsed the bill, but rather than mounting the all‑out campaign that greeted SB 1047, they have largely stayed quiet. The difference is in scope: startups are mostly exempt, and the focus is on transparency rather than strict liability. That focus also matches steps many firms already take — producing red‑team summaries, biosecurity assessments, or cyber‑capability studies — but without a common standard, those reports remain apples‑to‑oranges.
What’s at stake: catastrophic misuse, plainly put
Independent and internal reports have concluded that general-purpose models can help inexperienced actors in sensitive areas. Assessments from bodies including RAND, OpenAI, and Anthropic have found lower hurdles to basic wet‑lab planning or malware scaffolding in permissive scenarios, even as safety filters block many direct requests. Cybersecurity data offers context: the cost and frequency of breaches are up year over year, and automation is speeding up both defense and offense. The threat is not that models will invent new harms overnight, but that they can scale access to those harms and accelerate how quickly bad actors learn to wield them.
Clear, consistent disclosures will not eliminate those risks, but they add accountability. They let policymakers ask better questions, help customers distinguish among vendors, and help researchers identify gaps. In other industries — pharma, aviation, finance — transparency and reporting have gone hand in hand with safe speed. Wiener’s wager is that AI should be no exception.
What would happen next in California if SB 53 is enacted
Expect a gradual rollout: rulemaking to define reporting templates and thresholds; coordination with state and federal science agencies to make evaluations consistent; and early guidance on whistleblower protections, so as not to chill legitimate disclosures. CalCompute, if funded and built, could serve as a public‑interest counterweight, supporting independent replication of model evaluations and giving academic groups a seat at the table.
The principle, for Wiener, is simple: when models attain abilities that could credibly contribute to mass harm, opacity is not an option. SB 53 doesn’t pick which research to greenlight or kill. It insists that AI’s biggest players show their math — and make it available for audit now, not after a calamity.
