The Federal Trade Commission has begun a broad investigation into how major technology firms develop, market and deploy AI algorithms in the chatbots that millions of American children use. The agency sent mandatory orders to leading platforms, demanding detailed answers about protections and data practices and the real-life risks of AI “companions.”
At the center of this investigation is a simple question with complex implications: Are chatbots that have been designed for younger users safe, or are these minors making use of untested systems that could mislead them, manipulate them or mishandle their data? The F.T.C. is signaling it wants specifics, not slogans, on how companies are reducing harm.

What the F.T.C. is seeking
Investigators are pushing companies on how they design chatbot behavior with minors in mind — especially when the bots function as “companions” that engage in open-ended, intimate conversations. Questions includes age gating and parental controls; default settings; ways content moderation is managed on the service, as well as whether products are designed to minimise exposure to bullying, sexual content, self-harm prompts or exploitation.
The agency also seeks insight into how developers verify the safety of their apps before and after launch through measures like red-team testing, adjustments for sensitive content and guardrails that prevent jailbreaking. Another priority: transparency. Firms are being questioned about how they educate parents and young users about risks, limitations and the provenance of chatbot responses.
Data practices come directly into scope. The F.T.C. is investigating what information is taken from children, how long it is kept, whether it is used to train models and keep in touch with minors — and what safeguards the industry has against re-identification. That scrutiny extends to APIs powering AI chat programs inside games, education apps and social platforms popular with minors.
The legal backdrop: COPPA and beyond
COPPA prohibits companies from collecting personal information of children under the age of 13 without verifiable parental consent and requires privacy-by-design principles such as notice, disclosure and data minimization. The FTC enforces COPPA and has expressed interest in updating the rule to accommodate data flows and persistent identifiers in the age of AI.
Enforcement has been robust throughout the kids’ tech ecosystem at large. Epic Games settled C.O.P.P.A. charges related to Fortnite for a $275 million civil penalty, Amazon agreed to pay $25 million over Alexa recordings of children and Microsoft coughed up $20 million in dealing with Xbox sign-ups for children. The new AI investigation shows that the commission is ready to apply similar rigor to conversational systems.
Beyond COPPA, the FTC can also act against unfair or deceitful practices under Section 5 of the FTC Act—such as misleading claims about safety features, dark patterns that brink on over-lymphatic sharing, or a fail and to address foreseetable harms. That gives regulators the space to act even when products are not explicitly labelled for children but are commonly used by them.
Why A.I. companions pose fresh challenges
1Generative AI models are probabilistic and might ‘hallucinate’ — generating false or unsafe advice with high confidence. When chatbots take on caring or always-on personas, young users — already in the habit of asking Google everything and ever more accustomed to speaking their thoughts out loud thanks to virtual assistants built into smartphones — could attach themselves and follow advice that seems credible but isn’t rooted in expertise or evidence.
Researchers and standards committees have warned of the need for caution. The NIST AI Risk Management Framework emphasizes the need for context-based testing and continuous monitoring. UNICEF’s Policy Guidance on AI for Children promotes privacy-by-design, age-appropriate disclosures and important safeguards for children who may be particularly vulnerable.
There is also the risk of third-party ecosystem exposure. A bot that maintains a semblance of responsibility on the company’s main app doesn’t necessarily behave in the same way once it’s embedded into an edtech tool, a roleplay game, or some obscure companion app with shadier safety layers. This fragmentation is acknowledged by the FTC’s emphasis on developer policies and downstream enforcement.
What platforms might need to change
Look for pressure for more rigorous age verification, not just self-attestation. Companies could be encouraged to enable parental controls by default, shorten data retention periods for minors, and turn off certain features like unfiltered image search results generation or sharing a location or engaging in suggestive roleplay when a user is under 18.
Lives could be on the lineInjury assessments might get tougher and more public. That involves testing models alongside “child persona” prompts, sensing rates of unsafe response and charting how rapidly systems identify and block jailbreaks. Independent audits and incident reporting — which are the norm in other high-risk industries — could become a de facto standard of credibility.
Training pipelines may come under scrutiny.
Firms might have to show that kids’ data is not being used to train models unless parents give permission, and that synthetic or public datasets are scrubbed of any age-suspect content. Clear, straightforward notices to the parents will be crucial.
What it means for schools, parents, developers
Schools that have been quick to embrace AI tutors will now encounter more rigorous due diligence. Procurement officials should request documentation of model testing, data flows and the ability to disable risky features. Matching the FTC’s expectations—and student privacy laws like FERPA at the district level—will become a competitive asset for edtech vendors.
Parents: The inquiry is a reminder that A.I. chatbots are like any powerful media tool. To treat them as you would a smartphone or video game, tighten the controls, review chat histories if possible and establish norms around what questions can be asked of a bot.
Transparency dashboards and safety labels, if required or widely adopted, could help make decisions like these easier.
The message to developers is clear. Safeguards claims need to be evidence-based, privacy practices should adhere to both the letter and spirit of COPPA, and systems need to be built with children’s rights in mind from the start. The age of “ship now, fix later” is crashing into child protection law — and the FTC seems prepared to explore where the line lies.