FindArticles © 2025. All Rights Reserved.

OpenAI Announces Teen Safety Rules For ChatGPT

By Gregory Zuckerman
Last updated: December 19, 2025 7:02 pm
Technology

OpenAI is tightening how ChatGPT interacts with users under 18, introducing teen-specific guardrails and new AI-literacy resources for families at a time when policymakers are clashing over baseline protections for minors. The update is part of an effort to close the gap between what the public expects and how chatbots actually behave, as mental health risks, compulsive use and age-inappropriate content worry parents.

What OpenAI Changed in ChatGPT for Safer Teen Use

The company’s new Model Spec tells its models to “interact with teens like you would in real life,” rather than as they would with adults. ChatGPT may refuse immersive romantic roleplay, first-person intimacy, and any first-person sexual or violent scenario, even when framed as fantasy, fiction, historical reenactment or a classroom exercise. Models are instructed to tread more carefully around body image and disordered eating, to prioritize safety over autonomy when immediate harm is at stake, and to refuse advice that would help a teen conceal unsafe behavior from caregivers.

Table of Contents
  • What OpenAI Changed in ChatGPT for Safer Teen Use
  • Enforcement Remains the Question for Teen Protections
  • Lawmakers Consider Guidelines for Minors
  • Industry Dynamics and Youth Initiation in Generative AI
  • What Families Can Watch Now as Teen Tools Evolve
[Image: The ChatGPT app page in the iPhone App Store.]

OpenAI also teased an age-prediction model that would estimate whether an account likely belongs to a minor and, if so, automatically apply more restrictive settings. The updated parental controls also mention real-time automated classifiers that scan text, images and audio for child sexual abuse material, self-harm content and other sensitive material. When a conversation shows signs of severe distress, a human team trained to handle such content may review it and alert parents or guardians as appropriate.

The company also released two AI-literacy resources for parents and teens. They offer conversation starters, guidance on setting boundaries, and reminders that chatbots can make mistakes, can sound overconfident and should not replace professional care. The resources reflect a shared-responsibility model: OpenAI sets broad platform-level policies, while families layer their own house rules on top.

Enforcement Remains the Question for Teen Protections

Policy on paper is a beginning. Safety researchers have repeatedly warned of the dangers posed by “sycophancy” — where large language models mirror a user’s tone or are too quick to agree, even when it goes against guidelines. Research in both academic and industry labs has documented this behavior across model families, and parents fear the bots may extend risky conversations rather than interrupt them.

Recent incidents underscore the stakes. In one heavily reported case, a teenager named Adam Raine killed himself after months of conversations with a chatbot; the logs show hundreds of references to self-harm that were flagged by content filters but never meaningfully acted on. That gap, aggressive detection paired with reluctant intervention, is why advocates want break prompts, de-escalation protocols and human-in-the-loop review tied to measurable targets, not aspirations.

OpenAI’s spec also sits awkwardly alongside another principle often invoked in AI design: that, handled responsibly, “no topic is off limits.” Youth counselors warn that such framing can tilt systems away from safety and toward engagement in emotionally intense situations. Under the latest rules, ChatGPT refuses to “roleplay as your girlfriend” and steers users toward trusted adults when harm is involved, steps child-safety advocates have long lobbied for; consistent adherence in the wild will be the real test.

Lawmakers Consider Guidelines for Minors

Regulators are moving, albeit unevenly. A bipartisan group of 42 state attorneys general recently called on leading tech companies to strengthen protections in chatbots aimed at children. In Congress, proposals range from stronger labeling and parental-notification rules to outright bans on AI companions for minors. California’s SB 243, which targets AI companion chatbots and sets clear limits on self-harm and sexual content, is increasingly seen as a model for how platforms should behave and communicate.

[Image: The ChatGPT message input field, with a cursor over a Search button.]

At an international level, the EU’s AI Act and the UK’s Age Appropriate Design Code have begun nudging providers toward age-aware design and a higher standard of safeguarding for young users. In the United States, the Federal Trade Commission has issued warnings that misleading safety claims may be considered illegal marketing. Legal experts in privacy and AI say that once companies publicly commit to certain safeguards, falling short can provoke both consumer protection and product liability risk.

Industry Dynamics and Youth Initiation in Generative AI

Gen Z has been among the biggest adopters of generative AI, using it for homework help, creative projects and entertainment, and OpenAI’s latest entertainment deals may draw even more teens to conversational models. That growth is prompting a search for design choices that curb compulsive use, such as forced breaks, friction after sensitive prompts and gentle nudges to log off, even where those interventions conflict with engagement incentives.

Child-safety organizations such as Common Sense Media have praised OpenAI for publishing teen-specific guidelines, contrasting that transparency with competitors whose policies have surfaced only through leaks. Still, these groups want proof: regular third-party audits, published red-team metrics for under-18 scenarios, and transparent accounting of false negatives and escalation decisions.

What Families Can Watch Now as Teen Tools Evolve

Parents and educators can draw on OpenAI’s literacy guides:

  • Explain that chatbots are not people.
  • Set time limits.
  • Review conversations together when appropriate.
  • Steer teens toward trusted adults or professional resources when heavier topics come up.

Schools and districts should align their AI-use policies with existing mental health protocols and include an opt-in mode for students under 18.

The bigger question is whether teen-focused defaults, a bias toward playing it safe, de-escalation and occasional reminders to take a break, should apply across the board. Because suicide remains a leading cause of death among adolescents, according to the CDC, many advocates believe those protections should be standard regardless of age, with additional layers for minors on top. OpenAI says its approach is multi-layered and built to protect all users; time will tell whether practice matches the promise.

Gregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.