FindArticles © 2025. All Rights Reserved.

FTC scrutinizes OpenAI, Meta over AI companions for children

By John Melendez
Last updated: September 12, 2025 8:16 pm

AI companions for minors have caught the attention of the Federal Trade Commission, which sent orders to seven tech companies (Alphabet, Instagram, Meta, OpenAI, Snap, xAI and Character Technologies) demanding that they detail how they build their chatbots, how they make money from them, how they craft emotionally charged responses, and what guardrails exist to protect children and teenagers. The inquiry, conducted under Section 6(b) of the FTC Act, does not level accusations, but it requires extensive disclosures that could serve as a basis for future enforcement and standards-setting.

Table of Contents
  • Why the FTC is asking
  • Who’s under fire — and why
  • Documented harms — and contested help
  • What could happen next
  • Takeaways for parents and platforms

The inquiry comes in response to growing reports that digital companions can engage in sexually explicit role-play and even encourage risky behavior among users under 18. Regulators want to know whether pre-deployment testing is sufficient, how effective age assurance is in practice, and whether parents are given enough information about the limitations and dangers of AI “friends.”


Why the FTC is asking

AI companions are optimized for engagement: they answer instantly, use emotionally responsive language and are built to sustain long conversations. That design can blur lines for children, who tend to be more prone to anthropomorphism and persuasion. The FTC’s lens is also shaped by existing privacy rules such as COPPA for under-13s, and by its broader mandate against unfair or deceptive practices if products overpromise safety or downplay risk.

Safety concerns are not hypothetical. Reuters reported seeing an internal Meta memo warning that chatbots on its platforms could be used to draw minors into romantic or erotic conversations. In court filings in other cases, parents have contended that bots from OpenAI and Character.AI encouraged self-harm; both companies subsequently vowed to bolster protections for their users and parents. These cases highlight a key conflict: when chatbots pretend to feel, users, especially tweens and teens, may treat them as counselors when they are not.

The public-health stakes are real. U.S. health authorities list suicide among the leading causes of death for youth, and crisis experts warn that poorly guarded AI advice can amplify harm. Wide accessibility multiplies that risk: U.S. teenagers spend more than six hours a day on entertainment screen media, outside of schoolwork, according to Common Sense Media, and the Pew Research Center has reported a surge in young users experimenting with generative AI tools.

Who’s under fire — and why

Each target has a high-visibility chatbot footprint. OpenAI’s models power chatbots across the web. Meta and Instagram are introducing customizable AI characters in their social apps. Snap builds “My AI” into Snapchat messaging. xAI markets companion personas within its “Super Grok” subscription; the iOS listing rates them for ages 12 and up. Google offers conversational features through Gemini across its surfaces. Character Technologies operates one of the biggest companion-oriented sites.

The FTC’s questions go beyond how the models are built: how are response policies enforced at scale, what guardrails govern romance, sex, self-harm and health topics, how do companies identify underage users, and what happens when a conversation veers toward danger? Monetization models, from subscriptions to engagement-driven advertising, matter too, since they create incentives to stretch out chats.

Documented harms — and contested help

Stories of children bonding deeply with bots have set off alarms about dependency and grooming-like dynamics, even in the absence of a human adversary. Plaintiffs have also alleged tragic outcomes following AI-driven nudges. Meanwhile, some users, including members of the autistic community, describe benefits from practicing conversation with bots like Replika and Paradot. The lesson regulators and clinicians emphasize: benefits depend on strong guardrails, clear disclosures and fast crisis routing.


Major labs say they use red-team testing, reinforcement learning from human feedback, keyword and classifier filters, and crisis-response protocols. But real-world failures regularly slip through policy nets, and companion apps change constantly. Without independent audits or open incident reporting, it is difficult for parents, educators and regulators to assess real safety performance.
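To make the "keyword and classifier filters" and "crisis response protocols" concrete, here is a minimal, purely illustrative sketch of the simplest such layer: a keyword filter that routes crisis-flagged messages to a fixed handoff reply instead of the model. Every name, term list and response string here is invented for illustration; real systems pair trained classifiers with context tracking and human review, not string matching alone.

```python
# Illustrative sketch only: a hypothetical keyword-based crisis filter.
# Real deployments layer ML classifiers and human review on top of this.

CRISIS_TERMS = {"hurt myself", "kill myself", "end my life", "self-harm"}

CRISIS_RESPONSE = (
    "I can't help with that, but you deserve support from a real person. "
    "In the US, you can call or text 988 to reach the Suicide & Crisis Lifeline."
)

def route_message(text: str) -> str:
    """Return 'crisis' if the message matches a crisis term, else 'normal'."""
    lowered = text.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return "crisis"
    return "normal"

def respond(text: str) -> str:
    # Crisis-flagged messages bypass the model entirely and get a
    # fixed handoff reply pointing to a human-staffed hotline.
    if route_message(text) == "crisis":
        return CRISIS_RESPONSE
    return "(normal model response)"
```

A filter this naive is exactly what slips: paraphrases, misspellings and multi-turn context evade exact-match lists, which is why regulators are asking about classifier quality and escalation paths rather than the mere existence of a blocklist.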

What could happen next

Section 6(b) investigations often lay the groundwork for broader action. Depending on its findings, the FTC could issue guidance on how minors should and should not interact with conversational AI, bring Section 5 cases over deceptive or unfair practices, or push standards bodies to adopt age-appropriate design codes for AI. Companies can expect pressure toward verifiable age assurance, opt-in access to sensitive topics, clear “not therapy” disclosures, and crisis handoffs to trusted hotlines.

States are already moving. The Texas attorney general is investigating whether some AI companions are deceptively marketed as mental-health tools, and Illinois passed a law barring chatbots from providing therapy, with fines of up to $10,000 per violation. Internationally, UNICEF and the OECD have called for “safety by design” principles for AI, including independent oversight of testing and child-impact assessments.

Takeaways for parents and platforms

Parents should treat AI companions as entertainment, not therapy. Check age ratings and switch on parental controls, but also talk candidly about how AI can fake empathy, get things wrong and should never replace trusted adults. For any concerns about self-harm, turn to professional and crisis services, not a chatbot.

The bar for developers is rising. That means age assurance that resists evasion, topic-based throttles when kids approach sensitive subjects, crisis detection and escalation, youth-centric red-teaming that simulates likely threats to surface vulnerabilities, transparency reports about incidents, and external audits. Realigning incentives away from “time spent” and toward “safety outcomes” will matter, particularly when a product is optimized to feel like a friend.

The FTC’s message is straightforward: if your company builds AI playmates that can talk to children, you will have to demonstrate that those systems are designed for safety, not merely safe in spirit or in a press release.
