ChatGPT is about to get a little more personable and, for some, more intimate. OpenAI chief executive Sam Altman indicated that the company would roll out age-gating and a distinct adult experience, dialing back on filtering erotic or sexually explicit exchanges for verified adults while keeping in place more stringent safety measures for minors.
The move comes after months of user complaints that newer ChatGPT models felt flatter and less engaged. Altman promised a return to the more expressive and “friendly” personality of earlier versions, along with new parental controls and clearer separation between youth and adult modes.

What Altman Is Promising for ChatGPT’s Adult Mode
OpenAI is expected to release a version of ChatGPT that behaves more like its predecessors — which many users preferred — with an age-gated mode that verified adults can toggle on for explicit content. The company’s stance is simple: treat adult users like adults, without muddying things for younger audiences.
Practically, that means two things:
- A recalibrated conversational tone intended to be less sterile, with more warmth and flexibility.
- A gated lane for erotic roleplay and sexualized chat that is inaccessible to those under 18 and remains off unless the user is age-verified.
OpenAI has already spent time developing safety-minded tools, including enhanced parental controls and nuanced refusal behaviors for self-harm, harassment, and illegal acts. Adult mode would layer on top of those guardrails rather than replace them, with age verification as the linchpin.
Why OpenAI Is Recalibrating ChatGPT’s Personality
User engagement is the quiet force behind this turn. After a round of model updates designed to curb sycophancy and reduce mental health risks, many power users said ChatGPT became less spontaneous and more evasive. That is probably the safer route, but it can also feel robotic.
There’s also market pressure. Companion-style AI platforms like Character.AI and Replika have shown that more emotional (and, at times, explicit) conversations drive retention. Similarweb has reported that Character.AI users spend well over 20 minutes per session on average, a proxy for the stickiness OpenAI craves.
At the same time, much of the public is skeptical. In 2024, Pew Research Center found that about one in five U.S. adults had experimented with ChatGPT, and a majority were worried about potential harms. OpenAI is attempting to thread that needle: reclaim the spark that made ChatGPT feel human while maintaining rigor around safety.
The Age Verification Challenge and Privacy Trade-offs
Reviving an adult mode depends on being able to verify a user’s age without sacrificing privacy.
Companies typically choose from a range of options when making these checks, for example:
- Document checks
- Third-party attestations
- Credit-based checks
- On-device estimates of age
Each has a trade-off in accuracy, friction, and data exposure.
Regulatory scrutiny is rising. The EU’s Digital Services Act demands strong protection of minors and transparency about content-related risks, while the AI Act introduces duties around model-level risk and monitoring. In the UK, the Online Safety Act compels platforms hosting adult content to run robust age checks. App store rules add another layer: app-based experiences that allow explicit content face ratings and distribution restrictions.
Even with gates, safety issues remain. Models can be drawn into boundary-pushing exchanges, users may work to evade restrictions, and cultural norms vary greatly by region. The Stanford HAI AI Index has repeatedly found that leading models still produce prohibited outputs in red-team tests, underscoring the need for continuous tuning and human oversight.
What This Means for Users and Developers
For day-to-day users, the near-term impact will probably be a livelier assistant that is more at home with conversational nuance and personality. Verified adults who opt in will gain access to sexually themed interactions that remain barred to minors and unverified accounts.
Enterprises and educators will be watching closely. Companies and schools that deploy ChatGPT in the workplace or classroom will need admin controls to disable adult features entirely, along with clear audit trails. OpenAI plans to keep the default experience conservative for business and education use.
Developers building on top of OpenAI’s APIs will be looking for granular policy toggles. Fine-tuning and system prompts already shape tone and behavior; age-gated content adds policy routing, regional compliance flags, and verification checks to ensure mature interactions happen only in verified contexts.
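The kind of policy routing described above could be sketched like this — a purely hypothetical example; `UserContext`, `resolve_policy`, and the region set are illustrative assumptions, not part of any real OpenAI API:

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    age_verified: bool        # passed an age check (method unspecified)
    adult_mode_enabled: bool  # user's own opt-in toggle
    region: str               # ISO country code, for compliance flags

# Placeholder set of regions where adult content is barred regardless of opt-in.
RESTRICTED_REGIONS = {"XX"}

def resolve_policy(user: UserContext) -> str:
    """Pick a content-policy tier before a request reaches the model."""
    if not user.age_verified:
        return "default"   # unverified accounts stay on the safe tier
    if user.region in RESTRICTED_REGIONS:
        return "default"   # regional compliance overrides user preference
    if user.adult_mode_enabled:
        return "adult"     # verified adults who opted in
    return "default"       # verified but not opted in
```

The design point is that verification and regional rules gate the decision before user preference is even consulted, so an opt-in toggle alone can never unlock the adult tier.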
The Bigger Bet: One Model, Many Modes for Users
The message from Altman is a bet on segmentation: One model, many modes — tuned to audience and context. And if OpenAI can provide a warmer baseline personality while locking adult content behind reasonable age gates, it might recapture fans without throwing away hard-earned safety advances.
The devil is in the specifics, and we don’t yet know what those are. How age is confirmed, where adult mode is enabled, how consent and boundaries are indicated and enforced, how errors are resolved — all of that will matter more than any marketing tagline. If OpenAI does that well, ChatGPT may seem more human and better policed — a tricky equilibrium the entire industry is racing to find.