California moves closer to regulating AI chatbots

By Bill Thompson
Last updated: September 11, 2025 6:02 am

California is poised to establish the first state law in the U.S. regulating AI “companion” chatbots. SB 243 has cleared a pivotal hurdle and now heads to a final concurrence vote before going to the governor’s desk. The bill would require developers of relationship-style AI (so-called relational agents, such as apps that model romantic partners, confidants, or friends) to install guardrails and to bear liability when those systems fail to protect vulnerable users.

What SB 243 would mandate

The bill targets AI systems built to hold adaptive, human-like conversations and to meet users’ social or emotional needs. It bans those systems from producing content that promotes or facilitates self-harm or suicide and from engaging users in sexualized exchanges. Providers would have to implement safety protocols such as crisis-detecting content filters and escalation flows, along with recurring reminders that users are talking with software, not a human.
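As a concrete illustration, here is a minimal sketch of what such guardrails could look like in a chat loop. Everything here is an assumption for illustration: `classify_risk` stands in for a real safety classifier, `generate_reply` for the underlying model call, and the disclosure cadence is invented, since the bill specifies outcomes rather than implementations.

```python
# Hypothetical guardrail wrapper for a companion chatbot (illustrative only).
import time

CRISIS_RESOURCE = ("It sounds like you may be going through a hard time. "
                   "In the US, you can call or text 988 for the Suicide & Crisis Lifeline.")
DISCLOSURE = "Reminder: you are chatting with an AI, not a person."
DISCLOSURE_INTERVAL_S = 60 * 60  # assumed hourly cadence; the bill says "frequent"

def classify_risk(text: str) -> str:
    """Stand-in for a real safety classifier."""
    lowered = text.lower()
    if any(k in lowered for k in ("suicide", "self-harm", "hurt myself")):
        return "self_harm"
    return "ok"

def generate_reply(text: str) -> str:
    """Stand-in for the underlying model call."""
    return "I'm here to chat."

def respond(user_text: str, session: dict) -> str:
    if classify_risk(user_text) == "self_harm":
        session["crisis_referrals"] += 1  # metric surfaced in transparency reports
        return CRISIS_RESOURCE            # escalate rather than role-play
    reply = generate_reply(user_text)
    if time.time() - session["last_disclosure"] > DISCLOSURE_INTERVAL_S:
        session["last_disclosure"] = time.time()
        reply = f"{DISCLOSURE}\n{reply}"  # recurring "you're talking to software" notice
    return reply

session = {"crisis_referrals": 0, "last_disclosure": 0.0}
print(respond("hello", session))
```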

[Image: California State Capitol with AI chatbot icons, illustrating the regulation effort]

For minors, the bill requires in-product alerts that nudge users to take a break after every three hours of continuous chatting and that emphasize the companion is artificial. Platforms would also have to publish regular transparency reports revealing how often they direct users to crisis resources, a metric lawmakers say is crucial to grasping the true scope of harm.
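A similarly hedged sketch of the minor-specific pieces: a rolling three-hour break reminder and a counter feeding the crisis-referral metric the reports would disclose. The class and field names are invented for illustration.

```python
# Hypothetical session tracking for SB 243's minor-facing requirements.
import time

BREAK_INTERVAL_S = 3 * 60 * 60  # "every three hours" per the bill's reminder cadence

class MinorSession:
    def __init__(self) -> None:
        now = time.monotonic()
        self.started = now
        self.last_reminder = now
        self.crisis_referrals = 0  # how often the user was pointed to crisis resources

    def maybe_break_reminder(self) -> str | None:
        """Return a reminder once per three hours of continuous use, else None."""
        now = time.monotonic()
        if now - self.last_reminder >= BREAK_INTERVAL_S:
            self.last_reminder = now
            return ("You've been chatting for three hours. Consider taking a break. "
                    "Remember: this companion is an AI, not a person.")
        return None

def transparency_rollup(sessions: list[MinorSession]) -> dict:
    """Aggregate the referral metric lawmakers want reported."""
    return {
        "sessions": len(sessions),
        "crisis_referrals": sum(s.crisis_referrals for s in sessions),
    }

print(transparency_rollup([MinorSession()]))  # {'sessions': 1, 'crisis_referrals': 0}
```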

SB 243 also gives users a private right of action. Those who say they have been harmed by violations could seek injunctions, damages of up to $1,000 per violation, and attorney’s fees. That framework is built to produce real accountability for firms operating at consumer scale, including the best-known players: OpenAI, Character.AI, and Replika.

Why momentum is building

Public pressure to rein in companion bots has mounted after several high-profile failures. The suicide of teenager Adam Raine, whose extended conversations with a general-purpose AI allegedly included discussion of self-harm, spurred lawmakers into action. Reporting on internal documents further indicated that some social platforms’ chatbots permitted “romantic” or “sensual” exchanges with children, adding to the bipartisan unease.

Regulators are already circling. The Federal Trade Commission has raised red flags about AI and youth mental health and is gathering information on whether chatbots nudge users toward harmful behavior. The Texas attorney general has opened inquiries into how specific AI services market themselves to, and communicate with, minors. On Capitol Hill, lawmakers from both parties have launched investigations into how well major platforms protect teenagers.

The broader context is sobering. According to the CDC’s Youth Risk Behavior Survey, a large share of high school students report seriously considering suicide, and the World Health Organization ranks suicide among the leading causes of death for young people worldwide. Meanwhile, Common Sense Media and the Pew Research Center have independently documented rapid adoption of generative AI among teens and young adults, exactly the groups with whom companion-style interfaces are most popular.

[Image: California Capitol with chatbot icons, signaling the push to regulate AI chatbots]

What changed in the bill — and what didn’t

Earlier drafts were tougher. Provisions that would have explicitly banned “variable reward” mechanics (features such as streaks, memory unlocks, and rare-response collectibles that can lure people into compulsive use) were jettisoned in negotiations. Lawmakers also dropped rules that would have required operators to log and publicly report every time a bot initiated a conversation about self-harm.

What remains is still significant: a clear duty to avoid harmful content, repeated AI-identity disclosures, required reporting, and a mechanism for users to hold providers accountable. Just as significantly, the bill focuses on the distinct risks of parasocial intimacy and always-available conversation, rather than applying the generic content-moderation rule set developed for social feeds, as internet speech rules typically do.

Industry pushback and compliance reality

Technology firms have cautioned that state-level AI laws could create a patchwork that complicates product development. A related California bill, SB 53, would impose broader transparency requirements; large platforms such as Meta, Google, and Amazon have called for a lighter-touch federal approach, while the industry is split, with Anthropic backing stronger state-level transparency obligations.

Concretely, SB 243 would force companion-bot services to grow up quickly. Age gating, granular policy enforcement, and crisis-aware training data will be table stakes, as will red-team testing and layered safeguards aligned with the NIST AI Risk Management Framework. Companies will also need to reconcile new state reporting obligations with privacy limitations and with the open question of whether Section 230 shields AI-generated output. The private right of action raises the odds of class-action suits if safeguards fail at consumer scale.
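As a rough sketch, that compliance posture might be expressed as configuration plus a pre-launch check. The keys loosely mirror the NIST AI RMF functions (govern, map, measure, manage); every field name and value below is an assumption for illustration, not language from SB 243 or the framework.

```python
# Illustrative compliance configuration for a companion-bot service.
SAFEGUARDS = {
    "govern": {"age_gating": True, "policy_version": "2025-09"},
    "map":    {"covered_use": "companion_chat",
               "minor_modes": ["restricted_content", "break_reminders"]},
    "measure": {"red_team_cadence_days": 90,  # assumed cadence
                "report_metrics": ["crisis_referrals", "blocked_sessions"]},
    "manage": {"layered_filters": ["input_classifier", "output_classifier"],
               "escalation": "crisis_resources_then_human_review"},
}

def readiness_gaps(cfg: dict) -> list[str]:
    """Flag obviously missing safeguards before launch (illustrative checks only)."""
    gaps = []
    if not cfg["govern"]["age_gating"]:
        gaps.append("age gating disabled")
    if "crisis_referrals" not in cfg["measure"]["report_metrics"]:
        gaps.append("no crisis-referral metric for transparency reports")
    if not cfg["manage"]["layered_filters"]:
        gaps.append("no layered content filters configured")
    return gaps

assert readiness_gaps(SAFEGUARDS) == []
```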

The stakes for users and platforms

Companion chatbots are becoming stickier: users grow attached, confide their problems, and come back daily for support. That intimacy is precisely what makes safety failures so consequential; a bot quietly scripting self-harm details or sexualizing an exchange with a teenager poses a greater risk than a one-off toxic post in a social feed. By drawing a bright line around prohibited content and demanding ongoing transparency, California hopes to blunt predictable harms without criminalizing the category.

If SB 243 is enacted, it will most likely serve as a de facto national standard. Big providers rarely ship state-specific models; instead, they push the tightest rules across their entire fleets. Other states and regulators will be watching to see whether the disclosures, crisis-escalation tooling, and reporting requirements actually mitigate risk, and whether companion apps can keep operating under tighter guardrails.

By Bill Thompson
Bill Thompson is a veteran technology columnist and digital culture analyst with decades of experience reporting on the intersection of media, society, and the internet. His commentary has been featured across major publications and global broadcasters. Known for exploring the social impact of digital transformation, Bill writes with a focus on ethics, innovation, and the future of information.