OpenAI is tightening how ChatGPT interacts with users under 18, introducing teen-specific guardrails and new AI-literacy resources for families as policymakers clash over baseline protections for minors. The update is part of an effort to close the gap between what the public expects and how chatbots actually behave, at a moment when mental health risks, compulsive use and age-inappropriate content have parents concerned.
What OpenAI Changed in ChatGPT for Safer Teen Use
The company’s new Model Spec tells its models to “interact with teens like you would in real life,” not as they would with adults. ChatGPT is expected to refuse immersive romantic roleplay, first-person intimacy, and any first-person sexual or violent scenario, even when framed as fantasy, fiction, historical reenactment or a classroom exercise. Models are instructed to tread more carefully around body image and disordered eating, to prioritize safety over autonomy when immediate harm is a factor, and to refuse advice that would help a teen conceal unsafe behavior from caregivers.
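To make that behavioral shift concrete, here is a minimal sketch of how an application layered on a chat model might encode teen-mode rules. Everything in it, the category names, the rule sets and the decision labels, is a hypothetical illustration rather than OpenAI’s actual Model Spec machinery.

```python
# Illustrative sketch only: category names and decision labels are
# assumptions, not OpenAI's actual implementation.

TEEN_BLOCKED_CATEGORIES = {
    "romantic_roleplay",      # immersive romantic roleplay
    "first_person_intimacy",  # first-person intimate scenarios
    "first_person_violence",  # refused even with fictional framing
}

TEEN_HEIGHTENED_CARE = {
    "body_image",
    "disordered_eating",
}

def teen_policy_decision(prompt_categories: set[str]) -> str:
    """Map classifier categories to a coarse policy action for a minor."""
    if prompt_categories & TEEN_BLOCKED_CATEGORIES:
        # Refuse regardless of fantasy, historical or classroom framing.
        return "refuse_and_redirect"
    if prompt_categories & TEEN_HEIGHTENED_CARE:
        # Answer, but with extra caution and safety-first wording.
        return "respond_with_heightened_care"
    return "respond_normally"

print(teen_policy_decision({"romantic_roleplay"}))  # refuse_and_redirect
print(teen_policy_decision({"body_image"}))         # respond_with_heightened_care
```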

OpenAI also teased an age-prediction model that aims to infer whether an account likely belongs to a minor or an adult; accounts judged likely to be under 18 would automatically receive more restrictive settings. The updated parental controls also describe real-time automated classifiers that scan text, images and audio for child sexual abuse material, self-harm content and other sensitive material. When a conversation signals severe distress, a human team trained to handle such content may review it and alert parents or guardians as appropriate.
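As a rough illustration of how those pieces could fit together, the sketch below routes classifier outputs to age-based restrictions, blocking and human review. All of the field names, thresholds and actions are assumptions made for illustration; OpenAI has not published implementation details.

```python
from dataclasses import dataclass

# Hypothetical moderation-and-escalation flow; scores, thresholds and
# review hooks are illustrative assumptions, not a real API.

@dataclass
class ClassifierScores:
    minor_likelihood: float  # from an age-prediction model, 0..1
    self_harm: float         # content classifier score, 0..1
    csam: float              # content classifier score, 0..1

def route(scores: ClassifierScores) -> list[str]:
    actions = []
    if scores.minor_likelihood > 0.5:
        # Uncertain cases default to the more restrictive experience.
        actions.append("apply_teen_settings")
    if scores.csam > 0.0:
        # Zero tolerance: block and report per platform policy.
        actions.append("block_and_report")
    if scores.self_harm > 0.8:
        # Severe distress goes to a trained human team, which may
        # notify parents or guardians as appropriate.
        actions.append("queue_for_human_review")
        actions.append("surface_crisis_resources")
    return actions

print(route(ClassifierScores(minor_likelihood=0.9, self_harm=0.85, csam=0.0)))
# -> ['apply_teen_settings', 'queue_for_human_review', 'surface_crisis_resources']
```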
The company also released two AI-literacy resources for parents and teens. They offer conversation starters, guidance on setting boundaries, and reminders that chatbots can make mistakes, can sound overconfident and should not take the place of professional care. The resources reflect a shared-responsibility model: OpenAI sets broad platform-level policies that cannot account for every household’s context, while giving families tools to layer on their own rules at home.
Enforcement Remains the Question for Teen Protections
Policy on paper is only a beginning. Safety researchers have repeatedly warned about “sycophancy,” the tendency of large language models to mirror a user’s tone or agree too readily, even when doing so cuts against their guidelines. Research in both academic and industry labs has documented the behavior across model families, and parents worry the bots may prolong risky conversations rather than interrupt them.
Recent incidents underscore the stakes. In one heavily reported case, a teenager named Adam Raine died by suicide after months of conversations with a chatbot; the logs reportedly contained hundreds of references to self-harm that content filters flagged but never meaningfully escalated. That gap between aggressive detection and reluctant intervention is why break prompts, de-escalation flows and human-in-the-loop review need measurable targets, not aspirations.
OpenAI’s spec also sits awkwardly alongside another principle commonly invoked in AI design: that, handled responsibly, “no topic is off limits.” Youth counselors warn that such framing can tilt systems away from safety and toward engagement in emotionally intense situations. Under the new rules, ChatGPT refuses to “roleplay as your girlfriend” and steers users toward trusted adults when harm is involved, steps child-safety advocates have long lobbied for; consistent adherence in the wild will be the real indicator.
Lawmakers Consider Guidelines for Minors
Regulators are moving, albeit unevenly. A bipartisan group of 42 state attorneys general recently called on leading tech companies to strengthen protections in chatbots used by children. In Congress, proposals range from stronger labeling and parental-notification rules to outright bans on AI companions for minors. California’s SB 243, which targets AI companion chatbots and sets clear limits on self-harm and sexual content, is increasingly cited as a model for how platforms should behave and communicate.

At the international level, the EU’s AI Act and the UK’s Age Appropriate Design Code have begun nudging providers toward age-aware design and higher safeguarding standards for young users. In the United States, the Federal Trade Commission has warned that misleading safety claims may constitute deceptive marketing. Legal experts in privacy and AI say that once companies publicly commit to specific safeguards, falling short exposes them to both consumer-protection and product-liability risk.
Industry Dynamics and Youth Adoption of Generative AI
Drawn by help with homework, creative projects and entertainment, Gen Z has been among the quickest cohorts to embrace generative AI tools, and OpenAI’s latest entertainment deals may encourage even more teens to try conversational models. That growth is prompting a search for design choices that curb compulsive use, such as forced breaks, added friction after sensitive prompts and gentle nudges to log off, even when those interventions conflict with engagement incentives.
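For a sense of how such friction might work mechanically, here is a hypothetical sketch of break-nudge logic built from a session timer and a sensitivity counter. The one-hour window and three-turn threshold are invented for illustration and do not describe any product’s actual behavior.

```python
import time

# Hypothetical break-nudge logic; the limits below are illustrative
# assumptions, not a documented product setting.

SESSION_LIMIT_SECONDS = 60 * 60  # nudge after an hour of continuous use
SENSITIVE_TURN_LIMIT = 3         # nudge sooner in sensitive conversations

class Session:
    def __init__(self) -> None:
        self.started = time.monotonic()
        self.sensitive_turns = 0

    def record_turn(self, sensitive: bool) -> None:
        if sensitive:
            self.sensitive_turns += 1

    def should_nudge(self) -> bool:
        long_session = time.monotonic() - self.started > SESSION_LIMIT_SECONDS
        # Add friction sooner when the conversation has turned sensitive,
        # even if total session time is still short.
        return long_session or self.sensitive_turns >= SENSITIVE_TURN_LIMIT
```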
Child-safety organizations like Common Sense Media have praised OpenAI for publishing teen-specific guidelines, contrasting that transparency with competitors whose policies have surfaced only through leaks. Still, these groups want proof: regular third-party audits, published red-team metrics for under-18 scenarios, and transparent accounting of false negatives and escalation decisions.
What Families Can Watch Now as Teen Tools Evolve
Parents and educators can draw on OpenAI’s literacy guides for practical steps:
- Explain that chatbots are not people.
- Set time limits.
- Review conversations together when appropriate.
- Steer teens toward trusted adults or professional resources when heavier topics come up.
Schools and districts should align their AI-use policies with existing mental health protocols and include an opt-in mode for students under 18.
The bigger question is whether teen-oriented defaults (a bias toward caution, de-escalation and occasional reminders to take a break) should apply across the board. Because suicide remains a leading cause of death among adolescents, according to the CDC, many advocates argue those protections should be standard regardless of age, with additional layers on top for minors. OpenAI says its approach is multi-layered and built to protect all users; time will tell whether the promise matches the practice.
