OpenAI is signaling that age verification may become part of using ChatGPT: the company is exploring automatic age detection and may eventually require adults to prove they are over 18 to access the full experience. The early glimpse at potential protections, which CEO Sam Altman detailed in a company post, comes as generative AI’s intersection with youth safety draws increasing scrutiny.
Why regulators are considering age checks for AI users
Pressure is coming from two directions: regulation and real-world harm. Regulators in many regions have pushed to raise the bar for protecting minors online. In the United States, the Federal Trade Commission enforces COPPA, which restricts data collection from children under 13. In Europe, the Digital Services Act requires large platforms to assess and mitigate risks to children. In the United Kingdom, the Online Safety Act is expected to establish detailed age-assurance requirements, regulated by Ofcom and informed by the Information Commissioner’s Office.

Meanwhile, real-life harms are adding urgency. Lawmakers recently heard testimony from the family of Adam Raine, a teenager who died after extended conversations with ChatGPT; his death has spurred a wrongful-death lawsuit against OpenAI and renewed calls for stronger guardrails on AI systems. Against that backdrop, OpenAI’s signals around potential age checks also mirror a wider industry move toward “safety by design”.
What OpenAI has hinted at so far on age checks
Altman described “work on an automatic age detection system to automatically direct users under 18 to a safe version of ChatGPT.” He also said adults might one day have to prove their age to use the unrestricted product. The company offered no timeline, no detection method, and no list of accepted proof types.
At the same time, OpenAI says it plans to build parental controls featuring linked family accounts, through which parents could limit what their teens can access and receive summaries of conversation topics the system flags as potentially upsetting. The company says it could escalate to authorities if a parent or guardian can’t be reached in an emergency, a policy that will undoubtedly provoke debate, and likely legal challenges, from privacy hawks and child safety advocates alike.
The trade‑offs of age verification online
There is no perfect way to verify age on the internet. Government-ID checks can be robust but raise clear privacy and data security concerns; credit-card checks screen out unbanked users as well as many teens; mobile carrier lookups vary in availability by country; and selfie-based facial age estimation avoids ID storage but introduces biometric processing and potential bias. Digital rights groups, including the Electronic Frontier Foundation and the American Civil Liberties Union, have cautioned that poorly designed age verification could chill speech and widen surveillance.

Big platforms have already tried these options. Instagram and TikTok, for example, have experimented with facial age estimation from third-party providers such as Yoti. The UK Information Commissioner’s Office has stressed proportionality: systems should collect only the data that is strictly necessary, be transparent, and offer alternatives for people who won’t share sensitive data. Any OpenAI approach will be judged along the same dimensions: necessity, proportionality, data minimization, and user control.
Automatic age detection isn’t straightforward
Age “detection” without a verified input typically relies on behavioral signals: language style, usage times, and interaction patterns, for example. Academic research suggests those indicators are noisy and easily gamed. A high school senior and a first-year college student can look identical to an algorithm, and multilingual and neurodiverse users challenge even the best classifiers. False positives could trap adults in a walled-off mode, while false negatives could expose teenagers to content intended strictly for adults. Any such system will require clear appeal pathways, transparency about error rates and external auditing.
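To make that error trade-off concrete, here is a minimal, purely illustrative Python sketch; nothing in it reflects OpenAI’s actual system, and every name and number is invented. It shows how a single threshold on a noisy “probability of being a minor” score trades adults wrongly restricted against minors wrongly waved through.

```python
# Purely illustrative: a hypothetical router for a behavioral age
# classifier. All names and numbers are invented for this sketch.
from dataclasses import dataclass


@dataclass
class AgeDecision:
    restricted: bool   # route to the under-18 experience?
    p_minor: float     # estimated probability the user is a minor
    can_appeal: bool   # flagged users need a path to prove their age


def route_user(p_minor: float, threshold: float = 0.5) -> AgeDecision:
    """Route a user based on a noisy behavioral age estimate.

    Lowering `threshold` misses fewer real minors (fewer false
    negatives) but traps more adults in restricted mode (more
    false positives); raising it does the opposite.
    """
    restricted = p_minor >= threshold
    # Anyone flagged gets an appeal path, e.g. a formal age check.
    return AgeDecision(restricted=restricted, p_minor=p_minor,
                       can_appeal=restricted)


# A cautious deployment might lower the threshold, defaulting to
# the restricted mode when uncertain and relying on appeals:
print(route_user(0.42, threshold=0.35))  # restricted=True, appealable
```

The appeal flag is the key design choice here: because the score is noisy, any flagged user needs a route back to the full product, which is exactly why the article stresses appeal pathways and published error rates.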
What a “restricted” ChatGPT might look like
OpenAI has not specified what the under-18 experience would subtract or add. Defaults could include tighter content filters, more conservative answers to sensitive questions, study aids optimized for schoolwork, and prominently surfaced crisis resources when minors raise mental health or safety concerns. Browsing, plugin access and image generation might be limited or blocked for younger users, much as video platforms offer “supervised experiences.”
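As a sketch only, such a restricted mode could be expressed internally as a feature profile like the hypothetical Python mapping below; every flag name and default is invented for illustration, not drawn from anything OpenAI has published.

```python
# Hypothetical feature profiles, for illustration only; OpenAI has
# not published anything like this, and all names are invented.
UNDER_18_PROFILE = {
    "content_filter": "strict",           # tighter safety filtering
    "sensitive_answers": "conservative",  # cautious replies by default
    "web_browsing": False,                # disabled for younger users
    "plugins": False,
    "image_generation": False,
    "crisis_resources": "always_surface", # prominent help resources
    "study_aids": True,                   # schoolwork-oriented defaults
}


def profile_for(is_minor: bool) -> dict:
    """Pick the feature profile for a session (illustrative only)."""
    if is_minor:
        return UNDER_18_PROFILE
    # Adults who have verified their age get standard settings.
    return {key: (True if isinstance(value, bool) else "standard")
            for key, value in UNDER_18_PROFILE.items()}
```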
What matters next for OpenAI’s age verification plans
The hardest questions are still unanswered. How will adults confirm their age: ID, biometrics, or something else? Can parents gain visibility into, or control over, their children’s settings without prying into private conversations? How long will any verification data be retained, and by whom? Will third-party auditors and independent researchers be allowed to test the safeguards? The answers will help decide whether OpenAI’s plan becomes a privacy-preserving model for safety, or another flashpoint in the child-safety-versus-privacy debate.
For now, the company’s line is that stronger youth protections are on the way and that stricter access controls could follow. Given the regulatory environment and the stakes for families, age verification for ChatGPT feels less like a hypothetical than a question of execution.