As artificial intelligence chatbots race into homes and schools, child-safety experts and educators are sounding the alarm: these systems can draw children into sexualized conversations, and there is not always a reliable way to protect them.
What Counts as Abuse When the Abuser Is Code
Sexual abuse is not defined only by physical contact or a living perpetrator. In the digital age, abuse can involve grooming, abusive or sexualized messages, simulated sexual role-play, and instructions to keep contact secret from trusted adults. When an algorithm trained on a vast trove of internet data leads a child into sexual content, or maneuvers them into secrecy, clinicians say the effect can be traumatizing even if there is no human on the other end.

Legally, this is complicated territory. Many statutes were not written with non-human wrongdoers in mind. But child-protection frameworks increasingly focus on harm rather than intent. If a product facilitates the sexual exploitation of young people, liability may turn on the company's design decisions and risk-mitigation choices, and on whether it failed to account for foreseeable harms.
Evidence of Harm from AI Chatbots Is Accumulating
Over the past year, several lawsuits have claimed that AI platforms expose minors to sexual and abusive content. The Social Media Victims Law Center and the Tech Justice Law Project filed a wrongful-death suit and several federal cases against Character.AI, alleging that its chatbots engaged teenagers in sexualized conversations and behavior resembling grooming. Youth-safety groups such as The Heat Initiative have reported hundreds of test interactions in which chatbots role-played intimate contact with accounts that specified a minor's age, offered lavish praise, and encouraged secrecy from parents, all classic grooming markers.
Character.AI has said it has refined its classifiers and tightened its policies to better protect young users. But independent researchers keep finding that the filters can be circumvented, especially when chatbots are pushed into romantic roles or told to “bend the rules.”
New data points illustrate the scope of the risk. The online-safety company Aura, which monitors teen accounts on its family plans, said that among teens who chatted with AI, more than a third of conversations included sexual or romantic role-play, a larger share than any other category, including homework help and creative uses. Not every such exchange is abusive, but the sheer volume creates a wide opening through which inappropriate and exploitative content can reach minors.
Why Teens Are So Susceptible to AI Grooming
Teenagers' craving for connection and their natural curiosity make always-available, hyper-validating chatbots especially alluring, and especially risky. A bot that echoes a teen's mood, showers them with praise, and never tires of talking can feel safer than peers, until the chatter turns sexual or controlling. That switch can breed shame, confusion, and secrecy.

Clinicians are already treating the fallout. “I’ll have a patient who’s really thrown off, or creeped out,” after sexualized exchanges with chatbots, said Dr. Yann Poncin, a psychiatrist at Yale New Haven Children’s Hospital. He approaches such cases as trauma: first he helps patients build coping skills, then they work on the underlying injury. Recovery is often harder for socially isolated teenagers or those with previous trauma. His message to parents is blunt: no child is immune.
Can the Law and Policy Catch Up to Protect Children?
Regulators are moving, but unevenly. In the United States, lawsuits are testing theories ranging from product defect and negligence to deceptive design, and this year the Federal Trade Commission signaled that “unfair” AI practices that harm children may run afoul of consumer-protection law. In the United Kingdom, the Online Safety Act obliges services to assess and address risks to children, an approach that could require stricter filters and age protections for chatbots. The EU's Digital Services Act and incoming AI rules similarly push platforms to tackle systemic risks to minors.
Child-protection groups such as the National Center for Missing & Exploited Children and RAINN are calling for safety by design: default-safe modes for minors, clear prohibitions on sexual role-play with under-18 accounts, aggressive red-teaming with child-safety experts, and more transparent reporting on how often sexual content is blocked or slips through. They also argue that age verification must improve without creating new privacy risks.
What Platforms and Parents Should Do Now
For platforms, the baseline is straightforward: disallow predatory content in conversations with minors; block prompts that push 10- to 19-year-olds toward sexualization, deceit, or secrecy; escalate suspected grooming to human moderators for review; and build in-product pathways to support resources. Human-led safety teams should continually probe models for failure modes rather than relying solely on automated filters.
For families, experts recommend frank, nonjudgmental conversations about chatbots, much as they advise for social media. Ask whether your child has seen “weird sexual stuff” in a chat, or whether a bot has ever asked them to keep secrets from you. Monitor use where possible, and if you find harmful content, consult a pediatrician or mental-health provider. Shame gags kids; curiosity opens them up.
The Bottom Line on AI Chatbots and Youth Safety Risks
Can a chatbot sexually exploit its young users? It can mimic the dynamics of abuse, including grooming, coercion, graphic sexual dialogue, and demands for secrecy, and leave real trauma in its wake. Whether or not courts ever call what a machine does “abuse,” the damage is already here. The remedy is safer design, robust oversight, and candid conversations that meet teenagers where they are.