FindArticles
  • News
  • Technology
  • Business
  • Entertainment
  • Science & Health
  • Knowledge Base
  • Contact Us
  • About Us
  • Write For Us
  • Privacy Policy
  • Terms of Service
FindArticles © 2025. All Rights Reserved.

ChatGPT to Introduce Age Checks in Effort to Protect Teen Users

By John Melendez
Last updated: September 16, 2025 8:12 pm

OpenAI intends to add an age verification tool to ChatGPT, creating new rules for teenagers and possibly requiring some adults to prove they are at least 18. The chief executive, Sam Altman, cast the move as a “worthy tradeoff” to lessen harm, acknowledging that putting youth safety first will add some privacy friction for everyone.

Why age checks are arriving in ChatGPT now

AI chatbots have rapidly become confidants for topics too sensitive to discuss elsewhere, from school stress to mental health struggles. That explosion in use has highlighted the dangers to young users when models misread crises or treat harmful content as normal. Stanford University scholars have cautioned that AI “therapists” might overlook warning signs or perpetuate biases, and child-safety advocates are pressuring platforms to adopt tougher protections for minors.

Table of Contents
  • Why age checks are arriving in ChatGPT now
  • How OpenAI’s age prediction could work
  • What changes for teens — and for adult users
  • The privacy trade-off and promised safeguards
  • Regulator pressure and industry precedent
  • What families, schools and developers should monitor
[Image: ChatGPT logo and an age-verification prompt for teen age checks]

Recent tragedies involving minors’ interactions with AI, and the lawsuits that have followed, have intensified scrutiny. Regulators, clinicians and parents increasingly expect platforms to separate teen experiences from adult ones, and to keep chatbots that discuss romance, explicit content or self-harm away from underage users.

How OpenAI’s age prediction could work

OpenAI says it is building an age-prediction model that estimates a user’s age from how they use the service, then routes them to an appropriate experience. ChatGPT is intended for users 13 and up, so the system’s first task is to distinguish 13-to-17-year-olds from everyone older. When the model is unsure of a user’s age, it defaults to the under-18 experience.

In certain jurisdictions, OpenAI may require an official ID or comparable verification before granting access to adult features. That mirrors strategies adopted elsewhere on the internet, where platforms increasingly combine “soft” age estimation with “hard” checks in cases where law or policy demands greater certainty.
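To make the layered approach concrete, here is a minimal sketch of how such a gate might combine a soft model estimate with a hard ID check. Everything here is hypothetical: OpenAI has not published its model, thresholds, or routing logic, and the names and confidence cutoff below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class AgeSignal:
    """Hypothetical output of a behavioral age-prediction model."""
    estimated_age: float   # model's point estimate of the user's age
    confidence: float      # model's confidence in that estimate, 0.0-1.0

# Assumed threshold; OpenAI has not disclosed a real value.
CONFIDENCE_THRESHOLD = 0.9

def route_experience(signal: AgeSignal, id_verified_adult: bool = False) -> str:
    """Route a user to the 'teen' or 'adult' experience.

    A hard check (ID verification) overrides the soft estimate.
    Otherwise, the user gets the adult experience only when the model
    is both confident and estimates 18+; any uncertainty defaults to
    the under-18 experience, mirroring the default described above.
    """
    if id_verified_adult:
        return "adult"
    if signal.estimated_age >= 18 and signal.confidence >= CONFIDENCE_THRESHOLD:
        return "adult"
    return "teen"  # default whenever the model is unsure
```

The design choice worth noting is the asymmetry: false positives (an adult routed to teen mode) cost friction, while false negatives (a minor routed to adult mode) cost safety, so the uncertain case falls to the stricter experience.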

What changes for teens — and for adult users

OpenAI says the teen version of ChatGPT will not engage in sexually suggestive conversation, even when prompted, and will refuse to discuss or depict self-harm. When its systems detect a potential risk, the company says it may escalate: surfacing crisis resources and, in extreme situations, contacting guardians or emergency services under the company’s safety policies.

For adults, the company is preserving broader creative and conversational latitude within its safety guardrails. That can include, for instance, helping portray difficult themes, such as suicide in a fictional story, when it is clear the material is not promotional or instructional. Some adult-only functions may sit behind age checks, so older users may occasionally need to verify as well.

The privacy trade-off and promised safeguards

Age verification raises obvious privacy questions: what signals are collected, how long they are stored, and who can see them. OpenAI says it is building “leading-edge” security features to keep user data protected, even from most staff members, except in narrow cases such as investigating serious abuse, threats to life or other policy violations that require human review.

[Image: ChatGPT 18+ verification prompt with a shield icon, illustrating teen-safety age checks]

Experts tend to favor data minimization: gather only what is necessary to ascertain age, hold it briefly and use third-party checks where appropriate. The balance is delicate. Looser age controls raise the risk of harmful interactions with minors; stricter ones add friction and create a new data-handling liability that must be managed transparently.

Regulator pressure and industry precedent

Regulators are circling. The Federal Trade Commission has requested information from AI companies about “AI companion” products and their protections for children. In Europe, the Digital Services Act compels large platforms to address systemic risks to children. The UK’s Age-Appropriate Design Code promotes proportionate “age assurance,” so services can adapt protections when they know a user is younger.

Other platforms have already moved. Instagram has trialed Yoti’s facial age estimation tool to distinguish teens from users over 18. YouTube and other Google services increasingly require credit card checks, mobile carrier verification or, in some regions, ID for mature content. In taking this step, ChatGPT is bringing AI assistants in line with these broader content and access controls.

What families, schools and developers should monitor

Parents can expect more explicit “teen mode” defaults, firmer refusals around romantic and self-harm content, and better crisis resources. It’s also smart to talk with teens about what, exactly, the assistant can and cannot do, and to remind them that AI is not a substitute for clinicians, teachers or other trusted adults.

Schools and offices using ChatGPT may want to revise access policies, particularly when it comes to shared devices where age checks or teen defaults might be in use. Developers building on OpenAI’s APIs can expect more stringent age-gating and safety review regardless, especially for companion-style or wellness use cases.

The message is clear: the age of one-size-fits-all AI is over. If OpenAI can pair meaningful teen protections with transparent, privacy-minded age assurance, ChatGPT could set a new standard for responsible AI design — one that doesn’t infantilize its adult users but delivers a safe experience by default for the youngest among them.
