OpenAI has introduced safety features for ChatGPT aimed at users aged 13 to 17, promising to prioritize teen safety over competing product objectives. The update adds a youth-focused rule set to the company's Model Spec, the document that governs how its models behave, designed to change how the chatbot handles high-stakes conversations and to steer teenagers toward age-appropriate, less intrusive content.
What Changes for Under-18 Users in the New ChatGPT Update
When users are identified as younger than 18, ChatGPT will apply stricter guardrails across sensitive topics such as self-harm and suicide, romantic or sexualized role play, and requests to conceal dangerous behavior. Prevention, transparency, and early intervention will be the model's watchwords, the company says: refusing harmful requests, offering safer alternatives, and pointing to offline help in moments of perceived or imminent risk.
In practice, that means the assistant is programmed to reject risky prompts, explain why, and offer supportive, age-appropriate help. In situations involving an immediate threat, OpenAI says ChatGPT will recommend contacting emergency services or crisis resources rather than trying to handle the issue in-chat.
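OpenAI has not published its routing logic, but the behavior described resembles a tiered triage policy. The sketch below is purely illustrative: the risk labels, function names, and response wording are invented, not taken from ChatGPT.

```python
# Hypothetical triage sketch of the described teen-safety behavior:
# refuse risky prompts, explain why, offer supportive alternatives, and
# surface crisis resources when risk appears imminent. The risk levels
# and canned responses are invented for illustration.

from enum import Enum

class Risk(Enum):
    NONE = 0        # ordinary request
    SENSITIVE = 1   # e.g. sexualized role play, concealing dangerous behavior
    IMMINENT = 2    # signals of immediate danger to the user

def respond(prompt_risk: Risk) -> str:
    if prompt_risk is Risk.IMMINENT:
        return ("I can't help with that, and I'm concerned about your safety. "
                "Please contact emergency services or a crisis line right now.")
    if prompt_risk is Risk.SENSITIVE:
        return ("I can't continue with that request, because it isn't something "
                "I can safely help with. I can point you to age-appropriate "
                "resources or help you talk to a trusted adult instead.")
    return "Sure, here's what I can do."

print(respond(Risk.SENSITIVE))
```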
Why OpenAI Is Rolling Out Teen Safeguards for ChatGPT Now
OpenAI has faced growing legal, regulatory, and public pressure over children's safety on its platform, including lawsuits alleging that its chatbot mishandled suicidal thoughts. The company disputes the allegations in one of those cases, but the uproar highlights an industrywide problem: general-purpose AI tools are attracting teenage users before policy and product safeguards have caught up.
The concern is not theoretical. In 2021, according to the Centers for Disease Control and Prevention's Youth Risk Behavior Survey, 22 percent of U.S. high school students reported seriously considering attempting suicide, with above-average rates among girls and LGBTQ+ youth. While technology alone is not to blame, clinicians and researchers have cautioned that chatbots can inadvertently normalize or amplify high-risk content if they are not designed with youth contexts in mind.
Age Awareness and Enforcement for Teen Protections in ChatGPT
OpenAI says it is in the early stages of testing an age-prediction model for consumer accounts, which would turn on teen protections even when users do not self-report their age. Age prediction, used on everything from video-sharing sites to gaming networks, remains imperfect: false negatives can let minors see material they shouldn't, while false positives can block access for adults. The company has not shared how accurate the model is expected to be, how unclear signals will be escalated, or whether controls beyond the existing parental tools will follow.
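To make the tradeoff concrete, here is a minimal sketch, assuming a hypothetical classifier score and threshold (none of this reflects OpenAI's actual system), of how an uncertain age signal could default to the safer state rather than quietly disabling protections.

```python
# Hypothetical sketch of conservative age-signal handling. Names,
# fields, and thresholds are invented for illustration only.

from dataclasses import dataclass

@dataclass
class AgeSignal:
    probability_adult: float   # classifier's confidence the user is 18+
    self_reported_adult: bool  # whether the account claims to be 18+

def apply_teen_protections(signal: AgeSignal, adult_threshold: float = 0.90) -> bool:
    """Return True if teen guardrails should stay enabled.

    The policy is deliberately asymmetric: a false positive (treating an
    adult as a teen) costs some convenience, while a false negative
    (treating a teen as an adult) removes safety protections, so the bar
    for disabling protections is set high.
    """
    if not signal.self_reported_adult:
        return True  # account says under 18: always protect
    # Even self-reported adults keep protections unless the classifier is
    # confident; ambiguous signals default to the safer state.
    return signal.probability_adult < adult_threshold

# Example: a self-reported adult with an ambiguous signal keeps protections.
print(apply_teen_protections(AgeSignal(probability_adult=0.7, self_reported_adult=True)))  # True
```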
To address the knowledge gap for families, OpenAI is publishing two expert-reviewed guides on AI literacy for teens and parents. The resources aim to explain how to use chatbots responsibly, understand refusals, and seek offline support when content feels upsetting or unsafe.
Public Response and the New Youth Principles from OpenAI
The American Psychological Association weighed in on the youth principles. “While AI has its advantages, adolescents require a balanced life of interaction with ‘humans’ to ensure the development of social relationships and good emotional health,” said APA CEO Dr. Arthur C. Evans Jr. That advice aligns with protections that prompt users to turn to a trusted adult rather than rely on the chatbot for crisis decision-making.
OpenAI also says its newest model version, ChatGPT-5.2, handles mental health content more safely. As with any system update, independent review and red-teaming by child-safety experts will be critical to confirm that the model properly recognizes, de-escalates, and routes risk across languages, slang, and internet trends that shift over time.
The Broader Safety Landscape and Regulatory Pressure Worldwide
Regulators are ratcheting up demands for youth safeguards. In the United States, the Federal Trade Commission has signaled it will take a closer look at AI products that handle kids’ data and mental health. Under the European Union’s Digital Services Act, very large platforms must assess and mitigate systemic risks to minors. The UK’s Age-Appropriate Design Code has already forced product changes across social apps, which were required to adopt privacy and safety by default for kids.
Beyond mental health, the National Center for Missing and Exploited Children has tracked multiyear increases in online enticement and sextortion, raising the stakes for preventing sexualized interaction between adults and minors. Robust default refusals for sexual content and role play on teen accounts, along with swift detection and reporting, are now considered baseline expectations for any AI product minors can reach.
What to Watch Next as OpenAI Rolls Out Teen Safety Features
The real test of OpenAI’s teen-first approach will be transparency about measurement: how often the guardrails are triggered, the false positive and false negative rates, and what happens after teens receive crisis-oriented prompts.
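For readers who want those metrics made concrete, this is a small sketch of how false positive and false negative rates for a guardrail could be computed from a labeled evaluation set. The counts are invented; only the standard formulas are real.

```python
# Illustrative calculation of guardrail error rates from a labeled
# evaluation set. The evaluation counts below are made up.

def guardrail_error_rates(true_pos: int, false_pos: int, true_neg: int, false_neg: int):
    """Return (false_positive_rate, false_negative_rate).

    false positive: guardrail fired on a benign conversation
    false negative: guardrail failed to fire on a genuinely risky one
    """
    fpr = false_pos / (false_pos + true_neg)  # share of benign chats wrongly flagged
    fnr = false_neg / (false_neg + true_pos)  # share of risky chats missed
    return fpr, fnr

# Hypothetical evaluation: 180 risky chats caught, 20 missed,
# and 50 of 9,950 benign chats wrongly flagged.
fpr, fnr = guardrail_error_rates(true_pos=180, false_pos=50, true_neg=9_900, false_neg=20)
print(f"false positive rate: {fpr:.2%}, false negative rate: {fnr:.2%}")
```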
Just as important is usability: if refusals feel dismissive or confusing, teenagers may simply turn to less safe alternatives.
The company’s commitment to prioritize teen safety is notable, but lasting trust will depend on third-party assessment, clear disclosure, and rapid iteration as new risks emerge.
For now, OpenAI is acknowledging what the field has learned the hard way: when it comes to adolescents, safer by design is not a slogan — it’s an operational necessity.