Character.AI is shutting down open-ended chatbot conversations for users under 18, a dramatic change that will transform one of the most popular role-play A.I. platforms for young people. The company said it will add layered age verification; move teen accounts away from free-form chats to more structured entertainment features; and introduce parental controls for direct messages between content makers and their young followers.
What Changes for Under-18 Users on Character.AI
Teen accounts will no longer be able to initiate or participate in open-ended conversations with A.I. characters. Usage caps will be introduced during the transition before open-ended chat is removed entirely. Existing chat logs will remain accessible to their owners, though the company says those histories won’t seed unsafe content in any new features.

Instead of long-form role-play, the platform plans to offer short-form “A.I. entertainment,” such as audio and video stories built around existing characters, as well as gaming modes aimed at younger users. Company leaders cast the move as a proactive safety step at a moment of uncertainty about how extended, emotional conversations with A.I. affect young people.
The Safety and Legal Backdrop for the Policy Shift
The change comes as the company faces growing scrutiny. The Social Media Victims Law Center is representing families who say their teens were harmed after prolonged engagement on the platform; it has also filed, along with other firms, a wrongful-death claim in one instance, and, with the Tech Justice Law Project, a lawsuit on behalf of three mothers whose daughters died by suicide after prolonged interactions on social media. Online safety researchers have also warned that the service produces violent or sexual content despite built-in filters, logging hundreds of disturbing examples in testing.
Families who sued called the move significant but long overdue, saying that product safeguards should have preceded mass availability to minors. Their attorneys praised the decision while noting that it does not resolve their pending claims. Advocates also cautioned that abruptly removing a tool some teens had relied on for companionship could carry psychological fallout, and called for off-ramps and mental health resources.
The company, which apologized to its teen users in the announcement, has said the change is permanent and part of a broader industry re-evaluation of high-engagement chat. Even OpenAI has warned that its safeguards can become less reliable over very long conversations, which has added to calls for conservative defaults for minors.
How Age Verification Will Work for Teen Accounts
Character.AI says it will use multiple layers of age verification. In-house models will estimate each user’s age, with support from an outside vendor that can verify identity if a user disputes the decision. The company says it plans to incorporate additional signals, such as whether a person has been verified as over 18 elsewhere, to reduce friction while improving accuracy.
Sensitive data, including government IDs, will be processed by the third-party verifier rather than by the platform and will not be stored, according to the company. That approach is broadly in line with guidance from data protection authorities, which urge companies to limit how long they retain data while fulfilling child-safety obligations.

A New Nonprofit and the Path Forward for Safety Research
Separately from the product changes, Character.AI will fund an independent nonprofit, the AI Safety Lab, to research new protections for consumer A.I. The company said outside experts would help stress-test teen-facing features, including classifiers and experience design, with a mandate to publish findings that may inform the broader industry.
The move puts Character.AI at the front of what could become a new normal: limiting free-form A.I. chat for kids in favor of bounded experiences. Regulators have signaled that they want platforms to strengthen protections for young users. Simulated intimacy, late-night use and the repetitive engagement loops built into these systems may also negatively affect mood or threaten healthy sleep, risks that could be exacerbated by 24/7 A.I. companions.
What Parents and Teachers Need to Know Right Now
Families can expect prompts to confirm age and, in some cases, requests for identity verification through a trusted third party. Teens will see session limits before open-ended chat is turned off completely. If a young person has come to depend on A.I. companionship, clinicians suggest preparing alternatives, such as human support, moderated communities and creative outlets, that can ease the transition and soften the sense of sudden loss.
Simplifying Content Risk, but Not Eliminating It
For schools, the shift may simplify the task of managing content risk, though it will not eliminate it.
A.I. tools can be engineered to promote particular values and habits, so teachers will still need to build norms alongside their students: teaching context, critical thinking and the line between entertainment and recommendation. Clear handoffs to vetted mental health resources remain critical when students need support beyond what the classroom can provide.
Why It Matters Beyond One App for Youth A.I. Use
Character.AI’s pivot is a milestone for a burgeoning category: A.I. companions that learn user preferences and mimic intimacy over time. Pulling teens out of open-ended chat limits their exposure to unpredictable outputs and leaves fewer blurred lines between tool and confidant. Whether other platforms follow suit could help set the direction for the next stage of youth A.I. policy, and determine whether the industry can mature without introducing new modes of harm.