Character.AI will now turn off open-ended chatbot conversations for users under 18, as the web’s largest AI companion platform pivots to a more restrictive, creation-first approach backed by stronger age checks. The move comes amid increasing scrutiny of AI “friend” experiences for teenagers and coincides with new safety guardrails the company says will shape its next chapter.
What Is Changing in Character.AI’s Youth Chat Policies
The company will gradually wind down teen access to freeform chat, beginning with daily conversation limits that step down to zero. Users under 18 will be steered toward more structured creative tools rather than open-ended back-and-forth with AI personas.
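Character.AI has not published the exact schedule, so here is a minimal sketch of how a linear ramp-down to zero might work; the start cap, dates and decay curve are assumptions for illustration, not the company’s actual parameters.

```python
from datetime import date

# Hypothetical parameters: Character.AI has not published its schedule.
RAMP_START = date(2025, 10, 29)  # assumed first day limits apply
RAMP_END = date(2025, 11, 25)    # assumed day the cap reaches zero
INITIAL_DAILY_CAP = 120          # assumed starting minutes per day

def daily_chat_cap(today: date) -> int:
    """Allowed open-ended chat minutes for a given day, decreasing
    linearly from the initial cap to zero over the ramp window."""
    if today <= RAMP_START:
        return INITIAL_DAILY_CAP
    if today >= RAMP_END:
        return 0
    remaining = 1 - (today - RAMP_START).days / (RAMP_END - RAMP_START).days
    return round(INITIAL_DAILY_CAP * remaining)

print(daily_chat_cap(date(2025, 11, 12)))  # midway through -> roughly half
```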

Character.AI is recasting itself from an AI companion app into a role-play and storytelling studio. The lineup includes Scenes for stepping into curated stories, Streams for live character-to-character interactions, AvatarFX for turning still images into animated clips, and a Community Feed where users can browse one another’s creations. The goal is to shift from conversation to creation, which lowers the risks that arise when an AI acts as “a friend.”
Why The Company Is Changing Course On Teen Chats
AI companion chats have come under fire from parents, mental health campaigners and policymakers following high-profile cases and legal complaints that drew links between long chatbot exchanges and teenage self-harm. Though causality is murky, experts warn that systems fine-tuned for never-ending engagement can reinforce isolation, catastrophizing or exposure to risky content without strong guardrails.
Stark public-health data put the stakes in context. According to the World Health Organization, suicide is one of the leading causes of death for people ages 15–29 globally, making careful design essential for digital products that can affect mood and behavior. The U.S. Surgeon General has called on technology companies to build products for youth with “safety by design” in mind, drawing connections between online experiences and the well-being of teens.
Character.AI tells us the open-ended chat format, in which models ask follow-up questions to simulate companionship, is no longer its vision for youth. Instead, the company contends that structured, creative play offers clearer boundaries, more predictable content and fewer opportunities to reinforce harmful loops.
Stronger Age Checks And The Risks Of Verification
To help keep kids out of endless chat, Character.AI will layer multiple age-verification methods: in-house behavioral signals, third-party verification through vendors like Persona and, where necessary, facial recognition and government ID checks. The stack is designed to minimize evasion and improve accuracy across a global user base.
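Described as a pipeline, this resembles a cascade that escalates from cheap, low-friction signals to expensive, high-friction checks. Here is a minimal sketch under that assumption; the helper logic, thresholds and User fields are invented, and nothing here reflects Character.AI’s or Persona’s actual APIs.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class AgeResult(Enum):
    ADULT = auto()
    MINOR = auto()
    UNKNOWN = auto()  # inconclusive; escalate to the next layer

@dataclass
class User:
    account_age_days: int
    self_reported_adult: bool
    vendor_verdict: Optional[str] = None  # stand-in for a vendor response

def behavioral_signal_check(user: User) -> AgeResult:
    # Cheap in-house heuristic; the threshold is invented.
    if user.account_age_days > 365 and user.self_reported_adult:
        return AgeResult.ADULT
    return AgeResult.UNKNOWN

def vendor_check(user: User) -> AgeResult:
    # Stand-in for a third-party verification call (the article names
    # Persona); here we just read a precomputed verdict.
    if user.vendor_verdict == "adult":
        return AgeResult.ADULT
    if user.vendor_verdict == "minor":
        return AgeResult.MINOR
    return AgeResult.UNKNOWN

def id_or_face_check(user: User) -> AgeResult:
    # Highest-friction last resort (government ID or facial age estimate),
    # modeled here as conservatively treating unresolved users as minors.
    return AgeResult.MINOR

def verify_age(user: User) -> AgeResult:
    # Escalate through the layers, stopping at the first decisive answer.
    for check in (behavioral_signal_check, vendor_check, id_or_face_check):
        result = check(user)
        if result is not AgeResult.UNKNOWN:
            return result
    return AgeResult.UNKNOWN

print(verify_age(User(account_age_days=30, self_reported_adult=True)))  # MINOR
```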
Privacy groups like the Electronic Frontier Foundation and the Future of Privacy Forum have long cautioned that aggressive age-gating can increase surveillance, introduce bias and create new data-security obligations. False positives can also frustrate adults misidentified as minors. Character.AI will be judged not only on enforcement but on transparency: what data are collected, how long they are stored and how users can challenge errors.

The regulatory climate is tightening. Bipartisan legislation in the U.S. Senate would prohibit AI companion products for minors, following parental complaints about sexual content and self-harm prompts in chatbot apps. California has become the first state to set standards for AI companion safety. State enforcement is arriving while federal rules remain unsettled.
From Companion To Creation: Character.AI’s New Direction
Character.AI’s pivot doubles down on entertainment and participatory media: interactive stories, short-form AI video, game-like experiences. The company presents this as a safer way for young people to use generative models, with specific aims and fewer potentially fraught one-on-one chats.
The move carries business risk. Teenagers have been a key growth driver for AI chat apps, and the company expects some churn as open-ended chat goes away. The test will be whether the new tools feel creative and liberating rather than like a constraint. The shift may also reduce moderation costs, since structured interactions are generally easier to score, filter and audit than sprawling, intimate conversations.
What It Means For The Industry And Competitor Responses
Rivals still allow open-ended chats for teens, and some minors are bound to flock to them. But a high-profile platform stepping back from AI companionship could help establish a de facto norm: youth-safe AI focused on creation, not around-the-clock emotional support. App store policies from Apple and Google already mandate stronger protections for children, while Europe’s Digital Services Act presses platforms to assess the risks of youth use and tailor features accordingly.
Character.AI adds that it will fund an independent AI Safety Lab focused on safety alignment for agentic entertainment features. If the lab publishes concrete benchmarks, such as the time it takes to escalate a crisis signal, prompt-block rates or third-party audit results, it could help move the sector beyond promises toward measurable safety outcomes, following frameworks like the NIST AI Risk Management Framework.
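No such benchmarks have been defined publicly, but a sketch shows how straightforward they could be to compute and report; the event-log fields below (“harmful”, “blocked”, “escalation_seconds”) are invented for illustration and imply no actual Character.AI or Safety Lab schema.

```python
def prompt_block_rate(prompt_events: list[dict]) -> float:
    """Share of harmful-flagged prompts that were actually blocked."""
    flagged = [e for e in prompt_events if e["harmful"]]
    if not flagged:
        return 1.0
    return sum(e["blocked"] for e in flagged) / len(flagged)

def crisis_escalation_p95(crisis_events: list[dict]) -> float:
    """95th-percentile seconds from crisis signal to human escalation."""
    latencies = sorted(e["escalation_seconds"] for e in crisis_events)
    index = max(0, round(0.95 * len(latencies)) - 1)
    return latencies[index]

events = [
    {"harmful": True, "blocked": True},
    {"harmful": True, "blocked": False},
    {"harmful": False, "blocked": False},
]
print(prompt_block_rate(events))  # 0.5
print(crisis_escalation_p95([{"escalation_seconds": s} for s in (4, 9, 31)]))
```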
Ending open-ended chats for minors won’t eliminate every risk, and much will depend on real-world enforcement and product design. But it is a significant turn: prioritizing guardrails over stickiness and creation over companionship, at a moment when the safety of young people is shaping the form of consumer AI to come.