Meta is temporarily shutting off teen access to its AI characters across its apps as it readies a new version built specifically for younger users. The company says the pause is a response to parent feedback and part of a broader push to bake safety and supervision into AI experiences from the ground up.
When the updated AI characters return, Meta plans to add built-in parental controls and tighter guardrails on topics and tone. The company describes the new model as age-aware, tuned to offer age-appropriate responses and steer conversations toward areas like schoolwork support, sports, and hobbies.

The move arrives amid intensifying legal and regulatory scrutiny of youth safety online, including ongoing litigation that has spotlighted how social platforms design features for teens and how they assess risks tied to mental health and exploitation.
What changed and why Meta is pausing teen AI access
Meta previously previewed a slate of parental controls for its AI characters, including tools to monitor broad conversation topics, block specific characters, and even disable AI chats entirely. Rather than roll out those features incrementally, the company is now taking teen access offline while it finalizes a more controlled, teen-focused version.
Meta says parents asked for more visibility and stronger defaults. The updated characters are expected to limit the kinds of content and advice that appear in chat, emphasizing constructive, educational, and recreational prompts while filtering out mature or sensitive themes.
How the pause will work for teen AI characters across apps
The suspension applies to anyone who has provided a teen birthday and to users the company suspects may be underage based on its age prediction technology. Meta has long said it uses multiple signals to estimate age when it lacks a verified birthdate, an approach designed to make it harder for users to evade restrictions by misreporting their birthdays.
Once the new system launches, Meta says parents and guardians will have clearer controls to manage whether and how teens engage with AI characters. That includes the ability to turn AI chats off entirely, receive higher-level insights into the categories of interactions, and keep teen experiences aligned with household expectations.
The approach builds on recent Instagram changes that adopted PG-13-style filters for teens, restricting exposure to topics such as extreme violence, nudity, and graphic drug use. The new AI characters are expected to enforce similar thematic limits by default.
Legal and policy pressures mount on Meta’s teen AI plans
Meta’s timing underscores mounting pressure from courts and regulators. A case in New Mexico accuses the company of failing to adequately protect minors from sexual exploitation, and reporting has indicated that the company sought to narrow discovery requests concerning social media’s impact on teen mental health.

Separately, the company faces a high-profile trial centered on alleged social media addiction, where CEO Mark Zuckerberg is expected to testify. These proceedings arrive as policymakers sharpen demands for youth protections in digital products, including AI features embedded inside social apps.
Beyond the courtroom, compliance expectations are rising. The EU’s Digital Services Act requires the largest platforms to rigorously assess and mitigate systemic risks to minors. The UK’s Children’s Code pushes design defaults that prioritize kids’ best interests. In the U.S., the Surgeon General has urged safety-by-design for youth online, and COPPA continues to govern data collection for children under 13.
Industry momentum builds around teen AI safety measures
Meta is not alone in revising teen AI experiences. Character.AI banned open-ended conversations for under-18 users before announcing kid-focused interactive stories. OpenAI introduced teen safety rules for ChatGPT and began estimating user age to apply content restrictions.
Across the industry, the direction of travel is clear: age-aware models, topic filters calibrated to developmental norms, and more transparent controls for families. The debate now centers on how well platforms can enforce age limits, reduce false positives and negatives, and provide independent evidence that safeguards work as intended.
What to watch next as Meta relaunches teen AI tools
Key questions remain. How accurately can age prediction catch misreported birthdays without sweeping in adults? Will parents actually use the controls, and will teens find workarounds? Independent testing and regular transparency reporting will be essential to demonstrate that the new system meaningfully reduces risk while preserving useful, age-appropriate functionality.
The stakes are high. Pew Research Center data shows that 67% of U.S. teens use TikTok and 62% use Instagram, with heavy daily engagement reported across platforms. Any change to how AI features behave for minors can ripple through how millions of families experience mainstream social apps.
If Meta can pair strong default protections with clear, accountable measurement of outcomes, its relaunch could set a benchmark for teen-focused AI on large social networks. If not, pressure from courts, regulators, and parents is unlikely to abate.
