California has become the first state to pass a law regulating AI companion chatbots, with protections aimed squarely at minors, CNN reported. The new law, SB 243, codifies safety duties such as clear disclosure that users are talking to AI, bans on certain sensitive material, and safeguards meant to curb harm in interactions with young people. In a parallel move, Gov. Gavin Newsom vetoed a second, broader companion-chatbot proposal he considered too far-reaching, signaling a preference for targeted rules over sweeping bans.
What California’s SB 243 Requires of AI Companion Apps
The law requires AI providers to make it explicit that their chatbots are not human, resolving a major point of confusion in "companion" use cases, where systems sustain ongoing, intimate exchanges. Chatbots must not provide instructions in response to expressions of suicidal ideation or self-harm and must instead direct users toward crisis resources such as the 988 Suicide & Crisis Lifeline. Providers will also need to file regular reports explaining how they identify, handle, and escalate cases involving potentially vulnerable users, an accountability layer many safety researchers have been seeking.
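In engineering terms, that duty amounts to a pre-response check that intercepts risky messages before the model answers. The sketch below is purely illustrative: the classifier callable, the threshold, and the wording of the crisis message are assumptions, not anything prescribed by SB 243 or used by a specific vendor.

```python
# Hypothetical pre-response safety gate for a companion chatbot.
# The classifier callable, threshold, and crisis text are illustrative
# assumptions, not requirements spelled out in SB 243.

CRISIS_MESSAGE = (
    "I can't help with that, but you don't have to go through this alone. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

def respond(user_message: str, generate_reply, classify_self_harm_risk) -> str:
    """Route self-harm-related messages to crisis resources instead of the model."""
    risk = classify_self_harm_risk(user_message)  # assumed to return a 0.0-1.0 score
    if risk >= 0.5:                               # illustrative threshold
        return CRISIS_MESSAGE
    return generate_reply(user_message)
```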
When it comes to minors, SB 243 goes further. Chatbots must prompt younger users to take breaks during extended sessions and must not engage in sexual conversation or generate sexually explicit material. These obligations target risks specific to always-available, hyper-personal generative systems, which can erode boundaries and normalize harmful conduct over time.
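One plausible way to implement the break-reminder rule is a simple session timer keyed to whether the account is flagged as a minor; the interval and structure below are assumptions for illustration, since the law leaves implementation details to providers.

```python
import time

BREAK_INTERVAL_SECONDS = 60 * 60  # illustrative: nudge after roughly an hour of chat

class SessionTimer:
    """Tracks continuous chat time and signals when a break reminder is due."""

    def __init__(self, is_minor: bool):
        self.is_minor = is_minor
        self.last_reminder_at = time.monotonic()

    def break_reminder_due(self) -> bool:
        if not self.is_minor:
            return False
        return time.monotonic() - self.last_reminder_at >= BREAK_INTERVAL_SECONDS

    def mark_reminded(self) -> None:
        self.last_reminder_at = time.monotonic()
```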
The AI Companion Bill Newsom Vetoed and His Rationale
Newsom vetoed AB 1064, a bill that would have effectively barred companies from offering chatbots to children unless they could prove the bots would never discuss a long list of subjects. In his veto message, he wrote that he supported the goal of protecting children from harm but argued the bill's broad sweep could cut minors off from valuable learning and literacy tools. The split decision highlights California's effort to shield kids without sidelining useful AI applications such as tutoring and language practice.
Why California Moved First to Regulate AI Companions
Fears about AI companions have been heightened by real-world harms. In one widely reported case, the parents of teenager Adam Raine sued OpenAI after discovering conversations about suicide methods in the period leading up to their son's death. Though the company has since introduced parental controls and other safety features, the case intensified calls for statewide standards that don't hinge on voluntary measures.
The stakes are high. Suicide remains a leading cause of death among U.S. teens, and tech-enabled access to self-harm content is an ongoing risk factor, according to federal health data. As the Pew Research Center has reported, a large share of teenagers have already used chatbots, typically for homework help or out of curiosity. Child-safety groups such as Common Sense Media have called on lawmakers to mandate "safety by design" in AI products available to young people.
Early industry reaction suggests the new rules are workable: clear guardrails can encourage more responsible development and deployment. Other major players, including Google, Meta, Anthropic, Character.AI, and companies building relationship apps similar to Replika, will likely need to implement break reminders, sexual-content blocks for minors, and standardized crisis redirection at scale to comply with the law.
How Enforcement Might Play Out in Practice
Compliance will rest on three pillars: product UX, policy, and model behavior. On the UX side, providers will need clear AI disclosures and frictionless ways to escalate dangerous conversations to trained crisis resources. On the policy side, companies will need formal internal processes for identifying at-risk users, documenting escalations, and auditing failures. Technically, models will need better safety classifiers that can identify and block prompts involving self-harm or sexually explicit material with minors, alongside guardrails against popular jailbreak techniques such as role-playing or claims of "writing fiction."
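To make the model-behavior layer concrete, the sketch below shows how classifier scores and jailbreak heuristics might be folded into a single routing decision. Every name, pattern, and threshold here is a hypothetical assumption; production systems would rely on trained classifiers rather than keyword patterns, which are trivially easy to evade.

```python
import re
from dataclasses import dataclass

# Illustrative jailbreak cues only; real guardrails would use trained models.
JAILBREAK_PATTERNS = [
    re.compile(r"it'?s just fiction", re.IGNORECASE),
    re.compile(r"pretend (you are|to be)", re.IGNORECASE),
    re.compile(r"role-?play as", re.IGNORECASE),
]

@dataclass
class SafetyDecision:
    allowed: bool
    reason: str

def check_prompt(prompt: str, is_minor: bool,
                 self_harm_score: float, sexual_content_score: float) -> SafetyDecision:
    """Combine classifier scores and jailbreak heuristics into one routing decision."""
    if any(p.search(prompt) for p in JAILBREAK_PATTERNS):
        return SafetyDecision(False, "possible jailbreak framing")
    if self_harm_score >= 0.5:                    # illustrative threshold
        return SafetyDecision(False, "self-harm risk: redirect to crisis resources")
    if is_minor and sexual_content_score >= 0.3:  # stricter threshold for minors
        return SafetyDecision(False, "sexually explicit content blocked for minors")
    return SafetyDecision(True, "ok")
```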
Verifying a user's age is difficult in practice. Providers can take a risk-based approach: a combination of age-appropriate default settings, parental controls, and light-touch age verification that keeps kids safe without collecting sensitive identity data. Apple and Google, whose parental controls and app store policies for young people are already evolving, may play a crucial role in standardizing these controls across platforms.
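That risk-based approach could be expressed as tiered defaults keyed to whatever age signal is available, falling back to the most protective settings when the signal is missing. The tiers and values below are hypothetical illustrations, not taken from SB 243 or any platform policy.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SafetyDefaults:
    """Per-tier default protections; the values are illustrative assumptions."""
    allow_sexual_content: bool
    break_reminders: bool
    parental_controls_required: bool

# Hypothetical age tiers fed by a self-declared or platform-provided age signal.
AGE_TIER_DEFAULTS = {
    "under_13": SafetyDefaults(False, True, True),
    "13_to_17": SafetyDefaults(False, True, False),
    "adult":    SafetyDefaults(True, False, False),
}

def defaults_for(age_tier: str) -> SafetyDefaults:
    # Unknown or missing age signals fall back to the most protective tier.
    return AGE_TIER_DEFAULTS.get(age_tier, AGE_TIER_DEFAULTS["under_13"])
```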
How This Fits With Broader Oversight and Regulation
California's law bolsters nascent efforts at the federal and international levels. The Federal Trade Commission has warned that deceptive AI design and unfair practices will draw enforcement. The European Union's AI Act places transparency and safety obligations on general-purpose systems and higher-risk uses, while the United Kingdom's design principles for children shape how digital services treat minors. California's move on companions fills a gap by homing in on a popular, high-risk use case not clearly addressed elsewhere.
What to Watch Next as California’s AI Law Rolls Out
Expect fast iteration: break reminders, crisis redirection, and sexual-safety filters are relatively straightforward to ship, but thwarting jailbreaks and measuring real-world outcomes will challenge providers. Annual reporting must bring detection rates, false negatives, and effective interventions to light, metrics regulators and advocates will be examining closely. Other states, meanwhile, may follow California's model rather than attempt sweeping bans, looking for guardrails that keep minors safe while preserving legitimate educational and social uses of AI companions.