OpenAI is adding parental controls to ChatGPT to help teens and their families manage AI use. The features are aimed at curbing dangerous behavior, dialing back sensitive content and giving parents a direct path to notification when something seems seriously amiss, without turning the app into a surveillance system.
Here is what the controls do, how to turn them on and off, and how experts say parents and older children should think about artificial intelligence use at home.
- What parents can control in ChatGPT’s new safety tools
- How account linking works for supervised teen ChatGPT use
- Safety alerts when ChatGPT detects potential self-harm risk
- Data use and privacy choices for teens and their parents
- Expert reactions and context from safety and psychology
- Practical steps to get started with ChatGPT parental controls

What parents can control in ChatGPT’s new safety tools
When a parent and teen connect accounts, ChatGPT automatically places stronger content protections on the teen profile. OpenAI says these limit exposure to graphic content, extreme beauty ideals and sexual, romantic or violent roleplay. The protections are on by default, and teens cannot turn them off.
Parents also get a degree of control over app functionality. That includes quiet hours that shut down access at set times, turning off memory (so the model won't retain context or details from prior chats), disabling voice mode and even removing image generation altogether. These switches target the features most likely to encourage overuse or surface inappropriate content, but they do not block text chat entirely.
Notably, parents cannot read a teen's chat history. OpenAI characterizes the controls as guardrails and time limits, not an archive of conversations. It's an intentional design trade-off meant to balance safety and privacy.
How account linking works for supervised teen ChatGPT use
To turn on parental controls, a parent sends an invitation to connect accounts, and the teen has to accept. Teens can also request the connection themselves. Either way, the parent is notified if the teen later unlinks the accounts, so adults know when supervision loosens.
This consent model matters. It mirrors the approach of many school and youth-focused apps, and it sets an expectation that AI use is a family decision, not an invisible tracking regime. Families will generally want to have a short setup conversation so teens are clear on what is being restricted and why.
Safety alerts when ChatGPT detects potential self-harm risk
According to OpenAI, ChatGPT can spot warning signs that a teen might be contemplating self-harm. If certain signals are detected, a trained human team reviews the context and may notify the parent by email, text message or push notification when a situation looks especially risky. The system will sometimes raise a false alarm, the company acknowledges, but it considers responding better than staying silent.

OpenAI is also developing ways to alert emergency services when a parent cannot be reached or when there is immediate danger. That approach fits broader advice from mental health professionals, who say rapid escalation to humans is essential and that AI should act as a bridge to care rather than a substitute for it. The Centers for Disease Control and Prevention reports that rates of depressive symptoms among high-school students remain high, with more than one in five saying in recent surveys that they have seriously considered suicide, context that illustrates why timely alerts matter.
Data use and privacy choices for teens and their parents
Teen accounts are included in model training by default, meaning their data could be used to improve OpenAI's models. Parents who don't want that have to opt out in settings. This is an important detail: some families assume training doesn't apply to teen accounts, but that is true only if you change the setting manually.
The memory toggle offers a deeper level of control. When memory is turned off, ChatGPT should not carry details from one conversation to the next, which lowers the risk of private information lingering in context. Disabling voice mode and image generation can also reduce exposure to content that is harder for parents to review.
Expert reactions and context from safety and psychology
Child-safety and media-literacy organizations have lobbied for tougher protections for teens on all AI platforms. A top executive at Common Sense Media described OpenAI’s controls as only a starting point, and encouraged further investment in crisis protocols and default protections.
During a recent Senate hearing, experts including the American Psychological Association's chief of psychology called on lawmakers to mandate independent safety testing of AI tools available to minors and to curb manipulative design that maximizes engagement. That view echoes concerns raised in a recent high-profile lawsuit alleging that ChatGPT responded inappropriately when a teen discussed self-harm. The message from advocates is clear: AI systems should be designed to minimize risk and steer vulnerable users toward real people and professional help.
Surveys from organizations like Pew Research Center, meanwhile, suggest fast-rising awareness of and experimentation with generative AI among teens and their parents, making pragmatic, easy-to-use controls a priority rather than a niche feature.
Practical steps to get started with ChatGPT parental controls
- Link accounts and check the default content protections before a teen’s first chat.
- Turn on quiet hours for school nights and bedtime; consider disabling image generation for younger teens.
- If your family doesn’t want chats used for model training, opt out in settings.
- Clearly state the rules: what is allowed, what is not and what will happen if ChatGPT flags a risk.
- Explain to teens that parents cannot read chat logs but will be alerted if the system detects a serious safety issue.
- Revisit settings as teenagers mature and their schoolwork or extracurricular needs evolve.
The bottom line: ChatGPT’s parental controls are not a substitute for guidance from caring humans, but they can make everyday use safer and more predictable. Treat them as one part of a broader family technology plan, one that centers privacy, mental health and open dialogue.
