OpenAI is rolling out a stricter safety regime for ChatGPT users under 18, tightening controls on sexual content and discussions of self-harm, restricting late-night access, and giving parents new tools to supervise teen usage. The company frames the move as an effort to deliver powerful AI features alongside age-appropriate protections, even when that means defaulting to the more cautious option.
What is different for teens on ChatGPT today
The biggest change is in how the chatbot handles sensitive subjects. OpenAI says ChatGPT will no longer flirt with users it believes are underage, and tighter constraints will reduce the likelihood of sexual conversation. Discussions of self-harm will trigger proactive support responses, which may include notifying a parent or, in severe cases, local emergency services.

These updates point to a growing sentiment that general-purpose chatbots should come with different guardrails for young people. Child-safety advocates have argued for years that teens are especially susceptible to persuasive or anthropomorphic systems that can escalate sensitive discussions or validate dangerous ideation. The new policy seeks to mitigate those risks without cutting off access for positive academic and creative purposes.
Parental controls and distress escalation
OpenAI is rolling out “blackout hours” that let caregivers designate times when ChatGPT is off-limits for a teen account, an option families have requested as AI tools seep into homework and downtime. Parents who link accounts will be alerted when the system detects signs of distress and can step in to help defuse a crisis.
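As a rough illustration of how a caregiver-configured blackout window could gate access, here is a minimal sketch; the function names, times and logic are assumptions chosen for clarity, not OpenAI's actual implementation.

```python
from datetime import datetime, time

def is_blackout(now: datetime, start: time, end: time) -> bool:
    """Return True if `now` falls inside a caregiver-set blackout window.

    Handles windows that cross midnight (e.g. 22:00-06:00).
    """
    t = now.time()
    if start <= end:
        return start <= t < end
    return t >= start or t < end

# Example: a 10 p.m. to 6 a.m. blackout configured by a parent.
if is_blackout(datetime.now(), time(22, 0), time(6, 0)):
    print("ChatGPT is unavailable during blackout hours set by your parent.")
```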
The distress protocol is a significant departure. Instead of only offering resources, it can notify a trusted adult and, if signs point to imminent danger, contact emergency responders. Across the industry, companies have struggled with the same shaky ground: provide support without overstepping, intervene without invading privacy. OpenAI is signaling that, with minors, it will err on the side of intervention.
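One way to picture such a tiered protocol is a simple escalation ladder; the risk levels, signals and actions below are assumptions used to illustrate the idea, not OpenAI's published design.

```python
from enum import Enum

class RiskLevel(Enum):
    LOW = 1       # general sadness or stress
    ELEVATED = 2  # signs of acute distress
    IMMINENT = 3  # explicit indication of immediate danger

def escalate(risk: RiskLevel, parent_linked: bool) -> list[str]:
    """Return the escalation steps for a detected risk level (illustrative only)."""
    actions = ["share crisis resources in the conversation"]
    if risk is RiskLevel.ELEVATED and parent_linked:
        actions.append("notify the linked parent account")
    if risk is RiskLevel.IMMINENT:
        if parent_linked:
            actions.append("notify the linked parent account")
        actions.append("attempt contact with local emergency services")
    return actions

# Example: an account with a linked parent showing signs of acute distress.
print(escalate(RiskLevel.ELEVATED, parent_linked=True))
```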
Age assurance and the cost to online privacy
Reliably determining who is under 18 is technically difficult. OpenAI says it is working toward a longer-term system for deciding whether a user is a minor and that, when in doubt, it will apply the more stringent rules. The company urges families to link teen accounts to a parent profile to reduce ambiguity and enable notifications.
Age-assurance methods, from self-attestation to identity verification to probabilistic signals, impose different privacy burdens. Regulators increasingly demand “proportionate” verification for youth protection, while civil liberties groups caution that heavy-handed verification creates new data risks. OpenAI's pledge to put teen safety first while leaving adults wide latitude highlights a tension the entire sector is trying to negotiate.
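The “stricter rules when in doubt” principle can be sketched as a small decision rule; the parameter names and confidence threshold below are illustrative assumptions, not OpenAI's actual age-prediction system.

```python
def apply_teen_policy(predicted_adult: bool, confidence: float,
                      verified_adult: bool = False) -> bool:
    """Return True if teen protections should apply to this account (sketch only)."""
    if verified_adult:
        return False  # explicit verification overrides the age model
    if not predicted_adult:
        return True   # the model believes the user is a minor
    # When the model is unsure, err on the side of the stricter rules.
    return confidence < 0.9

# Example: an unverified account the model weakly guesses is an adult.
assert apply_teen_policy(predicted_adult=True, confidence=0.6) is True
```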

Growing legal and policy pressure
The policy shift comes amid heightened scrutiny. OpenAI faces a wrongful-death lawsuit claiming that prolonged conversations with ChatGPT contributed to a teenager’s suicide, and consumer chatbot company Character.AI faces similar litigation. On Capitol Hill, a Senate Judiciary Committee investigation of AI chatbots is focusing on youth safety and product design decisions that can heighten risk.
The spotlight is also on industry peers. A Reuters investigation last month surfaced internal guidance at another platform that appeared to permit sexualized exchanges with minors; the company said in response that it had changed its policy to disallow such interactions. Around the world, frameworks such as the EU’s Digital Services Act, the UK’s Online Safety Act and the US Children’s Online Privacy Protection Act are pushing platforms toward age-appropriate design and risk reduction for younger users.
What this means for users and the market
For teenagers, ChatGPT will act more like a school-friendly assistant: still useful for learning and creativity, but more careful with sensitive topics and, if parents choose, unavailable late at night. For families, the new controls make AI use more visible and time-bound, nudging it toward the digital-wellbeing practices many households already apply to social apps and games.
For OpenAI and its rivals, this sets the bar for “youth mode” experiences. Expect wider adoption of parental controls, clearer crisis-escalation protocols and more cautious defaults when a user’s age is unclear. Education providers and school districts, many of which have blocked AI tools on campus networks, are likely to treat such controls as a precondition for classroom adoption.
The broader lesson is that safety is not just output filtering; it is product architecture. Blackout hours, distress detection and verified caregiver links are infrastructure choices that reflect how teens actually engage with AI. If they work as planned, they could serve as a prototype for safer generative systems across the field.
If you or someone you know is at risk, please seek help from a trusted adult, a crisis line, or 988 in the U.S. Additional suicide prevention resources are available here: https://www.sbs.com.au/guide/article/2019/10/01/every-one-our-suicide-prevention-resources. If you are troubled by exploitative images or video content on social media, contact eSafety on 1800 880196 (Australia) or your local eSafety office if outside Australia.