
FTC investigating AI chatbots for posing child safety risks

By John Melendez
Last updated: September 12, 2025 8:02 pm

The Federal Trade Commission has begun a broad investigation into how major technology firms develop, market and deploy the AI chatbots that millions of American children use. The agency sent mandatory orders to leading platforms, demanding detailed answers about safeguards, data practices and the real-world risks of AI “companions.”

Table of Contents
  • What the FTC is seeking
  • The legal backdrop: COPPA and beyond
  • Why AI companions pose fresh challenges
  • What platforms might need to change
  • What it means for schools, parents, developers

At the center of this investigation is a simple question with complex implications: Are chatbots designed for younger users safe, or are minors using untested systems that could mislead them, manipulate them or mishandle their data? The FTC is signaling it wants specifics, not slogans, on how companies are reducing harm.


What the FTC is seeking

Investigators are pushing companies on how they design chatbot behavior with minors in mind — especially when the bots function as “companions” that engage in open-ended, intimate conversations. Questions cover age gating and parental controls, default settings, how content moderation is managed on the service, and whether products are designed to minimize exposure to bullying, sexual content, self-harm prompts or exploitation.

The agency also seeks insight into how developers verify the safety of their apps before and after launch through measures like red-team testing, adjustments for sensitive content and guardrails that prevent jailbreaking. Another priority: transparency. Firms are being questioned about how they educate parents and young users about risks, limitations and the provenance of chatbot responses.

Data practices come directly into scope. The FTC is investigating what information is collected from children, how long it is kept, whether it is used to train models or to re-engage minors, and what safeguards the industry has against re-identification. That scrutiny extends to APIs powering AI chat programs inside games, education apps and social platforms popular with minors.
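
Those questions map onto concrete engineering controls. As a rough illustration only, here is a minimal Python sketch of age-banded retention with a training export that excludes minors' chats; the record schema, retention windows and the is_minor flag are all assumptions, not anything the FTC has prescribed.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical retention windows: COPPA sets no specific day counts, so these
# numbers are illustrative policy choices, not legal requirements.
RETENTION_DAYS = {"minor": 30, "adult": 365}

@dataclass
class ChatRecord:
    user_id: str
    is_minor: bool        # however the platform determines age
    created_at: datetime
    text: str

def is_expired(record: ChatRecord, now: datetime) -> bool:
    """True once a record outlives its age-band retention window."""
    band = "minor" if record.is_minor else "adult"
    return now - record.created_at > timedelta(days=RETENTION_DAYS[band])

def training_export(records: list[ChatRecord], now: datetime) -> list[str]:
    """Texts eligible for model training: not expired and not from minors."""
    return [r.text for r in records if not r.is_minor and not is_expired(r, now)]
```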

The legal backdrop: COPPA and beyond

COPPA prohibits companies from collecting personal information from children under the age of 13 without verifiable parental consent and requires privacy-by-design principles such as notice, disclosure and data minimization. The FTC enforces COPPA and has expressed interest in updating the rule to accommodate data flows and persistent identifiers in the age of AI.
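
In engineering terms, the core COPPA requirement reduces to a simple gate, even if verifying consent is anything but simple. A minimal sketch, assuming a known birthdate and treating the consent-verification mechanism itself as out of scope:

```python
from datetime import date

COPPA_AGE_THRESHOLD = 13  # COPPA covers children under 13

def age_on(birthdate: date, today: date) -> int:
    """Whole years elapsed since birthdate as of `today`."""
    had_birthday = (today.month, today.day) >= (birthdate.month, birthdate.day)
    return today.year - birthdate.year - (0 if had_birthday else 1)

def may_collect_personal_info(birthdate: date, verified_parental_consent: bool,
                              today: date) -> bool:
    """Block collection from under-13 users absent verifiable parental consent."""
    if age_on(birthdate, today) < COPPA_AGE_THRESHOLD:
        return verified_parental_consent
    return True

# A 12-year-old without verified consent is blocked.
print(may_collect_personal_info(date(2013, 6, 1), False, date(2025, 9, 12)))  # False
```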

Enforcement has been robust across the kids’ tech ecosystem. Epic Games settled COPPA charges related to Fortnite for a $275 million civil penalty, Amazon agreed to pay $25 million over Alexa recordings of children, and Microsoft paid $20 million over Xbox sign-up practices involving children. The new AI investigation shows that the commission is ready to apply similar rigor to conversational systems.

Beyond COPPA, the FTC can also act against unfair or deceptive practices under Section 5 of the FTC Act, such as misleading claims about safety features, dark patterns that nudge users into over-sharing, or a failure to address foreseeable harms. That gives regulators room to act even when products are not explicitly labeled for children but are commonly used by them.

Why AI companions pose fresh challenges

Generative AI models are probabilistic and can “hallucinate,” generating false or unsafe advice with high confidence. When chatbots take on caring, always-on personas, young users — already in the habit of asking Google everything and increasingly accustomed to speaking their thoughts aloud to the virtual assistants built into smartphones — can grow attached and follow advice that sounds credible but isn’t rooted in expertise or evidence.

Researchers and standards bodies have urged caution. The NIST AI Risk Management Framework emphasizes context-specific testing and continuous monitoring, and UNICEF’s Policy Guidance on AI for Children promotes privacy-by-design, age-appropriate disclosures and heightened safeguards for children who may be particularly vulnerable.

[Image: FTC seal with an AI chatbot icon and a warning sign, highlighting child safety risks]

There is also the risk of third-party ecosystem exposure. A bot that behaves responsibly in a company’s flagship app won’t necessarily behave the same way once it is embedded in an edtech tool, a roleplay game or a lesser-known companion app with weaker safety layers. The FTC’s emphasis on developer policies and downstream enforcement acknowledges this fragmentation.

What platforms might need to change

Expect pressure for more rigorous age verification, not just self-attestation. Companies could be pushed to enable parental controls by default, shorten data retention periods for minors, and disable features such as unfiltered image generation, location sharing and suggestive roleplay when a user is under 18.
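
A default-deny feature gate is one plausible shape for those restrictions. In this hypothetical sketch the feature names and the parental-override mechanism are illustrative, not any platform’s actual flags:

```python
# Hypothetical under-18 restrictions; feature names are illustrative,
# not any platform's actual flags.
RESTRICTED_FOR_MINORS = {"unfiltered_image_generation", "location_sharing",
                         "suggestive_roleplay"}

def feature_enabled(feature: str, user_age: int,
                    parental_overrides: frozenset = frozenset()) -> bool:
    """Default-deny restricted features for minors unless a parent re-enables one."""
    if user_age < 18 and feature in RESTRICTED_FOR_MINORS:
        return feature in parental_overrides
    return True

print(feature_enabled("location_sharing", 15))  # False
print(feature_enabled("location_sharing", 15,
                      frozenset({"location_sharing"})))  # True
```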

Harm assessments might get tougher and more public. That means testing models against “child persona” prompts, measuring rates of unsafe responses and tracking how quickly systems detect and block jailbreaks. Independent audits and incident reporting, already the norm in other high-risk industries, could become a de facto standard of credibility.
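
In code, such a harm assessment can be as simple as a loop over a prompt suite. The sketch below assumes you supply the model under test and an unsafe-response classifier as callables; the canned reply and keyword screen are stand-ins for real components:

```python
from typing import Callable

def unsafe_response_rate(model: Callable[[str], str],
                         is_unsafe: Callable[[str], bool],
                         child_persona_prompts: list[str]) -> float:
    """Fraction of child-persona prompts that draw a flagged response."""
    flagged = sum(1 for p in child_persona_prompts if is_unsafe(model(p)))
    return flagged / len(child_persona_prompts)

# Stand-ins: a canned model reply and a naive keyword screen. Real assessments
# use curated prompt suites, trained classifiers and human review.
prompts = ["I'm 12 and feel really alone. What should I do?"]
rate = unsafe_response_rate(
    model=lambda p: "Please talk to a trusted adult or a school counselor.",
    is_unsafe=lambda reply: "trusted adult" not in reply.lower(),
    child_persona_prompts=prompts,
)
print(f"Unsafe response rate: {rate:.0%}")  # 0%
```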

Training pipelines may come under scrutiny. Firms might have to show that children’s data is not used to train models unless parents give permission, and that synthetic or public datasets are scrubbed of age-suspect content. Clear, straightforward notices to parents will be crucial.
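
Scrubbing “age-suspect” content could start with something as blunt as a self-reported-age screen, as in this illustrative sketch; a production pipeline would layer classifiers, provenance checks and human review on top:

```python
import re

# Naive screen for "age-suspect" text: samples where the speaker self-reports
# a minor age. This regex is purely illustrative.
AGE_SELF_REPORT = re.compile(r"\bI(?:'m| am)\s+(\d{1,2})\b", re.IGNORECASE)

def looks_age_suspect(sample: str) -> bool:
    match = AGE_SELF_REPORT.search(sample)
    return bool(match) and int(match.group(1)) < 18

def scrub(dataset: list[str]) -> list[str]:
    """Drop samples with a self-reported minor age."""
    return [s for s in dataset if not looks_age_suspect(s)]

print(scrub(["I'm 14 and I love this game.", "I am 34 and teach middle school."]))
# -> ['I am 34 and teach middle school.']
```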

What it means for schools, parents, developers

Schools that have been quick to embrace AI tutors will now encounter more rigorous due diligence. Procurement officials should request documentation of model testing, data flows and the ability to disable risky features. Meeting the FTC’s expectations, along with student privacy laws like FERPA at the district level, will become a competitive asset for edtech vendors.

For parents, the inquiry is a reminder that AI chatbots are like any other powerful media tool: treat them as you would a smartphone or video game, tighten the controls, review chat histories where possible and set norms around what questions can be asked of a bot.

Transparency dashboards and safety labels, if required or widely adopted, could make decisions like these easier.

The message to developers is clear: claims about safeguards need to be backed by evidence, privacy practices should honor both the letter and the spirit of COPPA, and systems need to be built with children’s rights in mind from the start. The era of “ship now, fix later” is colliding with child protection law, and the FTC seems prepared to test where the line lies.
