FindArticles © 2025. All Rights Reserved.

California Takes Aggressive Approach To AI Regulation

Last updated: October 13, 2025 7:20 pm
By Bill Thompson
News
8 Min Read

California is taking a hard line on artificial intelligence, scrutinizing how consumer-facing AI is built and signaling that enforceable regulations are not far behind. The standout example: a new law focused squarely on “companion” chatbots for teens, evidence that the world’s fifth-largest economy intends to write the practical playbook for AI safety rather than see it written in Washington or Brussels.

Under the bill, SB 243, which was signed into law Monday by Gov. Gavin Newsom, operators of AI chatbots must prevent minors from discussing sexual content with the software; provide clear and repeated warnings that an AI chatbot is not a human; and engage in proper crisis protocol when users mention suicide or self-harm. The law also dictates that companies must measure and report on risks, not only market shiny features — a change consistent with risk-based approaches advocated by standards bodies like the National Institute of Standards and Technology.

Table of Contents
  • What SB 243 Requires of Companion AI Chatbots
  • Why California Is Moving Now on Teen AI Chatbots
  • Implications for AI Companies Building Chatbot Products
  • California’s Broader AI Rulebook Is Taking Shape
  • What Comes Next for SB 243 and Companion AI Rules
[Image: California state Capitol with digital circuits, illustrating aggressive AI regulation]

What SB 243 Requires of Companion AI Chatbots

The law largely targets the mechanics that make companion chatbots feel sticky, and risky, for young users. Platforms are required to provide age-appropriate content filters, build guardrails that prevent sexualized responses to minors, and maintain moderation teams trained to identify questionable behavior. Just as important are the specific response pathways required when a user mentions suicidal ideation or self-harm: the app must direct that user to human support and crisis resources instead of continuing a casual conversation.
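As an illustration only, the kind of response pathway described above can be sketched as a thin guardrail layer in front of the chatbot. The keyword list, hotline text, and function names here are hypothetical placeholders, not anything SB 243 prescribes; a real system would use trained classifiers and human review rather than string matching:

```python
# Hypothetical sketch of a crisis-routing guardrail, not a production classifier.
# The terms and response text below are illustrative assumptions.
CRISIS_TERMS = {"suicide", "kill myself", "self-harm", "end my life"}

CRISIS_RESPONSE = (
    "It sounds like you may be going through something serious. "
    "You can reach the Suicide & Crisis Lifeline by calling or texting 988."
)

def route_message(user_message: str, generate_reply) -> str:
    """Return a crisis handoff instead of a casual chatbot reply
    when the message contains crisis language."""
    text = user_message.lower()
    if any(term in text for term in CRISIS_TERMS):
        # Break out of casual conversation and surface human resources.
        return CRISIS_RESPONSE
    # Otherwise defer to the normal model response.
    return generate_reply(user_message)
```

The point of the sketch is the control flow, not the detection method: the guardrail sits outside the model, so the crisis pathway fires regardless of what the model would have said.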

SB 243 is about more than product design. It requires annual reports on any identified connections between chatbot use and suicidal ideation, creating a feedback loop that regulators and parents can scrutinize. The law also gives families a private right of action against developers found to be non-compliant or negligent, raising legal exposure for companies that underinvest in safety. In practice, compliance will require robust age assurance, red-teaming for safety failure modes, audit logs for sensitive interactions, and trained staff who can intervene when things go sideways.

Why California Is Moving Now on Teen AI Chatbots

Child-safety advocates have been ringing alarms for months. Just yesterday, Common Sense Media issued a warning that “AI companions aren’t safe for teens under 18,” citing fears of sexual content, dependency, and blurred realities. Companion chatbots have also drawn scrutiny from the Federal Trade Commission, which has opened an inquiry demanding details from players large and small (OpenAI, Alphabet, Meta, and Character Technologies among them) about how they monetize engagement, shape outputs, and nurture personas to keep users coming back.

Tragedy also drives the policy momentum. Character Technologies is being sued for wrongful death; the lawsuit alleges a teenage boy spent extensive time interacting with a companion bot before dying by suicide, raising questions about anthropomorphic design and manipulation. Public health data underscores the urgency: according to the Centers for Disease Control and Prevention’s Youth Risk Behavior Survey, more than one in five high school students seriously considered suicide last year. To regulators, AI companions represent a new angle on an old crisis, the sort that calls for circuit breakers when conversations become unsafe.

Internationally, too, that’s the direction of travel. The EU’s AI Act sets guardrails for systems that could be used to influence behavior, and Italy’s data protection authority has taken steps in the past to curb AI companions among young people. California’s decision is a signal to U.S. companies that state rules may come faster, and bite harder, than any federal regulations.

[Image: California state Capitol with AI circuit and gavel, symbolizing strict AI regulation]

Implications for AI Companies Building Chatbot Products

For developers, the impact is operational and immediate. Whether a chatbot is designed as a companion or a general-purpose interface, apps will need risk controls tailored specifically to minors: persona designs that don’t create an illusion of intimacy; empathetic refusals that don’t normalize self-harm; automatic detection of crisis language; and seamless handoffs to human help. Age verification must be executed responsibly so it does not lead to overreach in personal data collection, particularly under laws such as the California Consumer Privacy Act.

The law’s reporting requirements make measurement a core product function. Firms will be required to detail their testing protocols, safety incidents, and how mitigation efforts were resolved. That dovetails with NIST’s AI Risk Management Framework, which recommends continuous monitoring and a clear chain of accountability. Companies that already do adversarial red-teaming, keep model cards, and log sensitive interactions will have a head start; those that don’t will feel the cost of playing catch-up.
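A minimal sketch of what logging sensitive interactions for later reporting could look like. The record fields and category names here are assumptions for illustration; neither SB 243 nor the NIST framework specifies a schema:

```python
# Hypothetical audit-log entry for a sensitive interaction; field names are
# illustrative, not drawn from SB 243 or the NIST AI RMF.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SafetyIncident:
    session_id: str
    category: str          # e.g. "crisis_language", "sexualized_content_minor"
    action_taken: str      # e.g. "crisis_handoff", "response_blocked"
    timestamp: str = ""

    def __post_init__(self):
        # Stamp each record in UTC so logs are comparable across regions.
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def to_report_line(incident: SafetyIncident) -> str:
    """Serialize an incident as one JSON line for an append-only audit log."""
    return json.dumps(asdict(incident), sort_keys=True)
```

Structured, append-only records like this are what make the law’s annual reporting tractable: aggregate counts per category can be produced without re-reading conversation content.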

California’s Broader AI Rulebook Is Taking Shape

SB 243 is one piece of a larger game plan. In parallel, California has moved legislation requiring AI labs to disclose foreseeable harms and their safety protocols, a transparency push analogous to obligations in other high-risk sectors. The state has also acted on adjacent youth online-safety fronts, including a new warning-label requirement for social platforms with addictive feeds and an age-verification mandate scheduled to take effect in 2027.

This follows California’s 2023 executive order directing agencies to assess AI risks and test model safety, as well as the state’s history of effectively setting national standards on privacy and consumer protection. As with vehicle emissions and privacy, companies frequently default to California’s requirements rather than ship one product for the state and another for everybody else.

What Comes Next for SB 243 and Companion AI Rules

Expect rulemaking on how to confirm age without over-collecting data, what qualifies as an adequate crisis protocol, and how to measure “exposure” to harmful outputs. Early enforcement by the Attorney General’s office and civil litigants is likely to shape the case law. At the federal level, the FTC inquiry could yield guidance or enforcement that dovetails with California’s trajectory, trimming some of the compliance patchwork presently frustrating AI builders.

There is a broader message here, and it’s clear: in California at least, the days of AI safety by press release are over. Companies that ship conversational AI, especially to teens, will be judged on the systems they build to mitigate harm, the data they gather to show those systems work, and their willingness to take responsibility when they fail.

By Bill Thompson
Bill Thompson is a veteran technology columnist and digital culture analyst with decades of experience reporting on the intersection of media, society, and the internet. His commentary has been featured across major publications and global broadcasters. Known for exploring the social impact of digital transformation, Bill writes with a focus on ethics, innovation, and the future of information.