California is taking a hard line on artificial intelligence, seeking to shape how consumer-facing AI is built and signaling that enforceable regulation is not far behind. The standout example is a new law focused squarely on “companion” chatbots for teens, evidence that the world’s fifth-largest economy intends to write the practical playbook for AI safety rather than see it written in Washington or Brussels.
Under the bill, SB 243, signed into law Monday by Gov. Gavin Newsom, operators of AI chatbots must prevent minors from discussing sexual content with the software, provide clear and repeated warnings that an AI chatbot is not a human, and follow defined crisis protocols when users mention suicide or self-harm. The law also requires companies to measure and report on risks rather than simply market new features, a shift consistent with the risk-based approaches advocated by standards bodies such as the National Institute of Standards and Technology.
What SB 243 Requires of Companion AI Chatbots
The law largely targets the mechanics that make companion chatbots feel sticky, and risky, for young users. Platforms must provide age-appropriate content filters, build guardrails that prevent sexualized responses to minors, and staff moderation teams trained to identify questionable behavior. Just as important, they must follow specific response pathways when a user mentions suicidal ideation or self-harm: the app should direct that user to human support and crisis resources rather than continue the casual conversation, as sketched below.
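In code terms, that crisis pathway can start as a routing layer that checks each message before the model’s casual reply goes out. The sketch below is a minimal illustration under stated assumptions, not language from the bill: the keyword list, the resource text, and the function names are all hypothetical.

```python
# A minimal sketch of a crisis-response pathway using simple keyword matching;
# the term list, resource text, and function names are illustrative assumptions,
# not the statute's language or any vendor's actual implementation.

CRISIS_TERMS = {"suicide", "kill myself", "self-harm", "end my life", "hurt myself"}

CRISIS_RESOURCES = (
    "It sounds like you're going through something serious. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)


def generate_chat_reply(user_message: str) -> str:
    # Stand-in for the normal companion-chat model call.
    return "(normal chat reply)"


def route_message(user_message: str) -> str:
    """Divert to crisis resources instead of casual chat when crisis language appears."""
    lowered = user_message.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        # Escalation path: surface human crisis resources and flag for trained staff review.
        return CRISIS_RESOURCES
    return generate_chat_reply(user_message)
```

Real systems would use classifiers rather than keyword lists, but the structural point stands: the safety check sits in front of the model, not behind it.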
SB 243 is about more than product design. It requires annual reports on any identified connections between chatbot use and suicidal ideation, creating a feedback loop that regulators and parents can scrutinize. The law also gives families a private right of action against developers found to be non-compliant or negligent, raising the legal exposure for companies that underinvest in safety. In practice, compliance will require robust age assurance, red-teaming for safety failure modes, audit logs for sensitive interactions (a minimal sketch follows), and trained staff to intervene when things go sideways.
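For the audit-log piece, one plausible starting point is an append-only record of every sensitive interaction so incidents can later be counted for annual reports. The sketch below assumes a simple JSON-lines file; the field names and categories are illustrative, not drawn from the statute.

```python
# Minimal sketch of an append-only log for sensitive interactions (JSON lines);
# field names and category strings are illustrative assumptions, not statutory terms.
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone


@dataclass
class SensitiveInteraction:
    session_id: str
    category: str       # e.g. "self_harm" or "sexual_content_blocked"
    action_taken: str   # e.g. "crisis_resources_shown" or "response_refused"
    timestamp: str


def log_sensitive_interaction(path: str, session_id: str, category: str, action_taken: str) -> None:
    """Append one record so incidents can later be tallied for annual reports."""
    record = SensitiveInteraction(
        session_id=session_id,
        category=category,
        action_taken=action_taken,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```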
Why California Is Moving Now on Teen AI Chatbots
Child-safety advocates have been ringing alarms for months. Common Sense Media has warned that “AI companions aren’t safe for teens under 18,” citing fears of sexual content, dependency, and blurred realities. Companion chatbots have also attracted scrutiny from the Federal Trade Commission, which has opened an inquiry demanding details from players large and small (OpenAI, Alphabet, Meta, and Character Technologies among them) about how they monetize engagement, shape outputs, and design personas that keep users coming back.
Tragedy also drives the policy momentum. Character Technologies is being sued for wrongful death; the lawsuit claims a teenage boy spent extensive time interacting with one of its companion bots before dying by suicide, raising questions about anthropomorphic design and manipulation. Public health data underscores the urgency: according to the Centers for Disease Control and Prevention’s Youth Risk Behavior Survey, more than one in five high school students seriously considered suicide last year. To regulators, AI companions represent a new angle on an old crisis, the sort that calls for circuit breakers when conversations turn unsafe.
Internationally, too, that’s the direction of travel. The EU’s AI Act sets guardrails for systems that could be used to influence behavior, and Italy’s data protection authority has taken steps in the past to curb AI companions among young people. California’s decision is a signal to U.S. companies that state rules may come faster, and bite harder, than any federal regulations.
Implications for AI Companies Building Chatbot Products
For developers, the impact is operational and immediate. Apps whose chatbots act as companions or general-purpose interfaces will need risk controls tailored to minors: persona designs that don’t create an illusion of intimacy; empathetic refusals that don’t normalize self-harm; automatic detection of crisis language; and seamless handoffs to human help. Age verification must also be handled responsibly so it doesn’t turn into over-collection of personal data, particularly under laws such as the California Consumer Privacy Act. A minimal sketch of what such a minor-safety policy might look like in configuration form appears below.
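The sketch below is a hypothetical per-session policy object; the field names and the ten-turn disclosure cadence are assumptions for illustration, not requirements spelled out in SB 243.

```python
# Hypothetical per-session safety policy for users identified as minors;
# field names and thresholds are illustrative assumptions only.
from dataclasses import dataclass


@dataclass(frozen=True)
class MinorSafetyPolicy:
    block_sexual_content: bool = True        # refuse sexualized outputs for minors
    ai_disclosure_every_n_turns: int = 10    # periodically restate that the bot is not a person
    crisis_escalation_enabled: bool = True   # route self-harm mentions to crisis resources
    store_age_band_only: bool = True         # keep an age band, not a birthdate or ID scan


def needs_ai_disclosure(turn_count: int, policy: MinorSafetyPolicy) -> bool:
    """True when the next reply should repeat the 'you are talking to an AI' reminder."""
    return (
        policy.ai_disclosure_every_n_turns > 0
        and turn_count % policy.ai_disclosure_every_n_turns == 0
    )
```

The `store_age_band_only` flag reflects the privacy tension: the safest designs keep just enough age signal to apply the policy and nothing more.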
The law’s reporting requirements make measurement a core function of the product. Firms will be required to detail their testing protocols, safety incidents, and how mitigations were resolved. That dovetails with NIST’s AI Risk Management Framework, which recommends continuous monitoring and a clear chain of accountability. Companies that already run adversarial red-teaming, keep model cards, and track sensitive interactions will have a head start; those that don’t will bear the cost of playing catch-up. A sketch of how logged interactions could roll up into report-ready counts follows.
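Measurement then becomes a matter of rolling those logs up. Continuing the hypothetical JSON-lines format from the earlier sketch, a report-ready summary could start as a simple count of logged interactions by category.

```python
# Sketch of aggregating the hypothetical JSON-lines audit log above into the
# kind of category counts an annual report might draw on.
import json
from collections import Counter


def summarize_incidents(path: str) -> Counter:
    """Count logged sensitive interactions by category (e.g. self_harm)."""
    counts: Counter = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                counts[json.loads(line)["category"]] += 1
    return counts


# Example (hypothetical data): summarize_incidents("sensitive_interactions.jsonl")
# might yield Counter({"sexual_content_blocked": 40, "self_harm": 12}).
```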
California’s Broader AI Rulebook Is Taking Shape
SB 243 is one piece of a larger game plan. In parallel, California has moved legislation requiring AI labs to disclose foreseeable harms and their safety protocols, a transparency push analogous to obligations in other high-risk sectors. The state has also acted on adjacent youth online-safety fronts, including a new warning-label requirement for social platforms with addictive feeds and an age-verification mandate scheduled to take effect in 2027.
This follows California’s 2023 executive order directing state agencies to assess AI risks and test model safety, as well as the state’s history of effectively setting national standards on privacy and consumer protection. As with vehicle emissions and privacy, companies often default to California’s requirements rather than ship one product for the state and another for everyone else.
What Comes Next for SB 243 and Companion AI Rules
Expect rulemaking on how to confirm age without over-collecting data, what qualifies as an adequate crisis protocol, and how to measure “exposure” to harmful outputs. Early enforcement by the Attorney General’s office and civil litigants will likely shape the case law. At the federal level, the FTC inquiry could yield guidance or enforcement that dovetails with California’s trajectory, trimming some of the compliance patchwork that currently frustrates AI builders.
The broader message is clear: in California at least, the days of AI safety by press release are over. Companies that ship conversational AI, especially to teens, will be judged on the systems they build to mitigate harm, the data they gather to show those systems work, and their willingness to take responsibility when they fail.