
FTC probes OpenAI, Meta about kid-safe AI pals

By John Melendez
Last updated: September 12, 2025 8:21 pm

The Federal Trade Commission has initiated a broad inquiry into whether AI “companions” are safe for children and teens, issuing compulsory orders to seven companies developing the technology: Alphabet, Character Technologies, Instagram, Meta, OpenAI, Snap and xAI. The agency is interested in how those systems are built, trained and monetized, and what, if anything, companies are doing to avoid harm to younger users.

Table of Contents
  • Why kids and AI companions don’t mix
  • What the FTC is asking
  • The safety deficiencies regulators are watching
  • Design tensions: empathy, engagement and revenue
  • What’s next for OpenAI, Meta and friends

Acting under its Section 6(b) authority, the FTC is calling for extensive disclosures relating to development practices, age controls, response generation and safety testing. The investigation focuses on a fast-growing corner of consumer AI where chatbots mimic human conversation and affection, a combination that has provoked enormous engagement and serious safety concerns.


Why kids and AI companions don’t mix

AI companions are built to be always-on, friendly and convincing. That’s a powerful mix for teenagers, who are still developing impulse control and the ability to assess risk. American health officials have highlighted the growing problem of youth mental health, and researchers have warned that always-on digital communication can exacerbate loneliness rather than alleviate it, especially when systems are optimized to maximize user attention.

The market has sprinted ahead. Meta introduced personalized AI characters across Instagram, WhatsApp and Messenger. xAI added “companion” personas, including flirty options, to its premium tier. Character.ai and other apps are built for social, romantic or mentoring interactions with bots. Independent tests have demonstrated just how readily these systems can breach sensitive boundaries: earlier reporting revealed chatbots willing to roleplay sex or downplay risky conduct with users who self-identified as young teens, before fixes were enacted.

Mozilla’s “Privacy Not Included” research found widespread safety and privacy gaps in relationship chatbots, including weak age checks and inconsistent responses to self-harm prompts. Youth tech advocates like Common Sense Media have also cautioned that generative systems are particularly ill-equipped to give guidance on mental health, substance use or sex.

What the FTC is asking

Under the orders, companies will have to provide details on how their chatbots generate responses, what data and filters they use for training, and how they assess risks related to minors. The FTC is also asking about age-gating practices, parental controls, escalation paths for disclosures of self-harm or abuse, and any third-party red-team audits that may have been conducted. And critically, the agency is looking at how these services are monetized, whether through subscription upgrades, ads or features that increase engagement, and whether those incentives steer bots toward boundary-pushing interactions.

Section 6(b) inquiries are not allegations of a violation, but they can lead to an enforcement action or a public report. If the agency were to determine that an AI companion markets to, or knowingly collects data from, children under 13 without verifiable parental consent, that could raise compliance issues under the Children’s Online Privacy Protection Act (COPPA). More broadly, the FTC can go after “unfair or deceptive” practices if companies overpromise safety or fail to take reasonable steps to protect users from harm.


The safety deficiencies regulators are watching

Regulators are looking at foreseeable, and preventable, failure modes: chatbots that give harmful advice on self-harm, sex or drugs; sexualized or romantic exchanges with accounts that appear to belong to children; bots positioned as confidantes or surrogate caregivers in ways a utility assistant like Siri never was; and parasocial attachment that may discourage rather than encourage real-life relationships. There have also been cases where bots “hallucinate” medical or legal advice without adequate disclaimers or handoffs to human help.

Real-life events have forced the issue up the agenda. One Reuters investigation detailed internal guidance that allowed some AI assistants to engage in romantic or sexual conversation, including with minors, leading to policy changes. Media tests revealed that early versions of some popular chatbots gave unsafe tips to accounts posing as underage users, and companies scrambled to patch them through emergency policy changes and new guardrails. These reversals illustrate the core regulatory critique: dangerous conduct is being caught in the wild rather than prevented at design time.

Design tensions: empathy, engagement and revenue

AI companions are built on warmth and continuity, the same emotional levers that drive engagement metrics and premium upsells. But those dynamics can undercut safety boundaries. Personalization and memory features may deepen attachment even when systems disclose that they are “just a bot.” And when revenue models reward time spent rather than well-being, product teams face added pressure to keep users engaged with emotionally sticky content and responsive personas, which is exactly where safety lapses tend to occur.

Experts point to existing frameworks that could help, such as the NIST AI Risk Management Framework and safety-by-design principles promoted by child-safety groups. Practical steps include blocking romantic or sexual content for any account until age is verified, conservative default responses on sensitive topics for younger users, human-in-the-loop escalation for self-harm or abuse disclosures, independent audits with transparent results, robust data minimization, and parent-facing visibility into how reports are handled. A minimal sketch of how such rules might fit together follows below.
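As an illustration only, here is a minimal sketch of a pre-model guardrail layer implementing those bright lines. Every name in it (Account, guard, the keyword lists, the canned replies) is hypothetical; a production system would use trained classifiers, vetted crisis resources and real age assurance rather than keyword matching and boolean flags:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Topic(Enum):
    ROMANTIC = auto()
    SELF_HARM = auto()
    GENERAL = auto()

@dataclass
class Account:
    user_id: str
    age_verified: bool = False  # has the user passed a real age-assurance check?
    is_minor: bool = True       # default to the safest assumption

# Toy keyword classifier; stands in for a trained safety model.
SELF_HARM_TERMS = {"hurt myself", "end it all", "self-harm"}
ROMANTIC_TERMS = {"date me", "be my girlfriend", "be my boyfriend"}

def classify(message: str) -> Topic:
    text = message.lower()
    if any(term in text for term in SELF_HARM_TERMS):
        return Topic.SELF_HARM
    if any(term in text for term in ROMANTIC_TERMS):
        return Topic.ROMANTIC
    return Topic.GENERAL

def guard(account: Account, message: str, review_queue: list) -> str | None:
    """Return a canned safe reply, or None to let the model answer."""
    topic = classify(message)
    if topic is Topic.SELF_HARM:
        # Human-in-the-loop: queue for a trained reviewer, reply with resources.
        review_queue.append((account.user_id, message))
        return ("It sounds like you're going through a lot. You can reach the "
                "988 Suicide & Crisis Lifeline by calling or texting 988.")
    if topic is Topic.ROMANTIC and not account.age_verified:
        # Bright line: no romantic roleplay until age is verified.
        return "I can't take part in romantic conversations."
    return None  # safe to hand off to the underlying model

# Example: an unverified account asking for romance is refused outright.
queue: list = []
teen = Account(user_id="u123")
print(guard(teen, "Will you date me?", queue))  # -> refusal message
```

The point of the sketch is ordering: the safety checks run before the model ever sees the message, so a lapse in the model’s own behavior cannot bypass them, which is precisely the design-time prevention regulators say is missing.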

What’s next for OpenAI, Meta and friends

The companies must provide granular documentation in response to the FTC’s orders, an exercise that can take months and exposes how systems actually work versus how policies say they do. Outcomes could range from a public FTC report to consent orders mandating specific protections, or even deceptive-practice cases. State attorneys general are also watching, with some already investigating AI services that market themselves as mental-health assistants.

For the industry, the message is clear: if AI companions are going to inhabit mainstream platforms used by teens, safety can’t be bolted on later. The companies that can prove rigorous age assurance, conservative defaults for minors and audit-ready evidence of testing will be best positioned to keep building, without becoming the next headline.
