
AI-powered chatbots for teen mental health ‘unsafe,’ say experts

By Pam Belluck
Last updated: November 20, 2025 11:12 am

Four of the world's best-known artificial intelligence chatbots have been put through empathy and safety testing in a new study published in JMIR Mental Health.

The results are not encouraging for people who need help managing mental health challenges such as depression and anxiety, conditions that are common among teenagers.

Table of Contents
  • Bots can’t identify key red flags, say experts
  • Spotty safety and conversation shifts over time
  • Strong demand, high risk as teen use increases
  • What safer design would entail for teen users
  • Guidance for families right now on teen chatbots
[Image: An AI-powered chatbot on a smartphone, deemed unsafe for teen mental health by experts.]

The researchers, from Common Sense Media and Stanford Medicine's Brainstorm Lab, found that these tools often missed warning signs of suicidal thinking, offered misleading advice and abruptly shifted roles mid-chat in ways that could endanger at-risk youth.

The analysis covered popular platforms that teens use for schoolwork as well as for personal guidance. After systematic testing, the authors recommend that developers disable mental health features for young users until the underlying safety issues are addressed and independently evaluated.

Bots can’t identify key red flags, say experts

Testers said chatbots frequently missed signs of potential psychosis, disordered eating and trauma. In one example, a user described making a “personal crystal ball,” classic delusional content, and the bot played along enthusiastically rather than flagging the remark or suggesting professional evaluation.

In a second example, a user described an imagined relationship with a celebrity alongside paranoid ideation and auditory hallucinations. The system treated the case as an ordinary breakup, offered generic coping strategies and failed to screen for psychosis or prioritize urgent care.

Faced with references to bulimia, chatbots sometimes acknowledged the danger but were easily diverted by innocuous explanations. In several threads, they treated serious mental health conditions as digestive complaints, overlooking established clinical red flags.

Spotty safety and conversation shifts over time

Experts observed modest improvement in how some systems respond to explicit mentions of suicide or self-harm, especially in short exchanges. But safety declined over extended conversations, as the models became too casual, flipped into “supportive friend” mode or reversed earlier cautions, an effect the researchers call conversation drift.

That inconsistency matters because adolescents' chats are often long and full of tangents, and the more context a conversation accumulates, the more likely a chatbot is to miss a pivot toward crisis or to offer false reassurance. Age gates and parental controls were inconsistent too, with weak verification and haphazard enforcement from platform to platform.

[Image: A young boy smiling at a smartphone, with a chatbot icon and speech bubbles overlaid.]

Legal scrutiny is rising. Lawsuits in recent months have claimed that interactions with chatbots led to self-injury, even as companies stress that their systems are not a replacement for therapy and that they train them to steer users toward crisis resources. The researchers argue that disclaimers do not offset the risk of persuasive but medically unsound guidance.

Strong demand, high risk as teen use increases

The warnings come amid a youth mental health crisis. Thirty percent of teen girls have seriously considered suicide, according to the C.D.C.'s Youth Risk Behavior Survey, and symptoms of depression and anxiety continue to rise. The Surgeon General has urged immediate action to safeguard adolescents' mental health in digital spaces.

Meanwhile, teenagers are trying out chatbots for companionship, advice and a sense of anonymity they might not find offline. Because these systems handle schoolwork and creative tasks well, families may assume they are equally trustworthy on sensitive health topics. The report warns that fluency is easily mistaken for expertise.

What safer design would entail for teen users

For minors, researchers say, mental health use cases should be paused while key safeguards are re-engineered. At the top of the list: reliable detection of psychosis, eating disorders, PTSD and ADHD; safety behavior that holds up across extended chats; and guardrails against “role confusion,” the toggling between clinician, coach and friend.

They also recommend stronger age verification, clear scope-of-use messaging and automatic escalation to live resources when risk criteria are met. Independent audits, red-teaming with child psychiatrists and public reporting on failure rates would help restore trust. Any wellness content should likewise be created with clinical oversight and be judged on harm reduction, not just user satisfaction.

Guidance for families right now on teen chatbots

Until such protections are demonstrated, the experts recommend that parents and caregivers talk with their teens about the limitations of AI: chatbots may be useful for homework or brainstorming, but they are not therapists. Urge young people to treat a bot's advice as unverified information, not a diagnosis or a treatment plan.

Establish clear boundaries around when and why chatbots are used, watch for late-night or compulsive use, and keep avenues open for real-time support from trusted adults. If a teen is struggling with self-harm, suicidal thoughts, extreme distress or disordered eating, seek professional help right away and use crisis services where available.

The authors say AI has real potential to make meaningful contributions to health care, but not, for now, through chatbots positioned as support for teen mental health. Without thorough redesign and transparent validation, presenting them as trustworthy guides risks turning vulnerable young people into de facto test subjects.

Pam Belluck
Pam Belluck is a seasoned health and science journalist whose work explores the impact of medicine, policy, and innovation on individuals and society. She has reported extensively on topics like reproductive health, long-term illness, brain science, and public health, with a focus on both complex medical developments and human-centered narratives. Her writing bridges investigative depth with accessible storytelling, often covering issues at the intersection of science, ethics, and personal experience. Pam continues to examine the evolving challenges in health and medicine across global and local contexts.