
12% of US Teens Seek Emotional Support From AI

By Pam Belluck
Last updated: February 25, 2026 5:02 pm
Science & Health · 6 Min Read

A growing slice of American adolescence now includes confiding in chatbots. A new Pew Research Center survey finds that 12% of U.S. teens use AI for emotional support or advice, a notable shift that nudges artificial intelligence from homework helper into the role of confidant. While most teen interactions with AI still revolve around search and schoolwork, this quieter trend is raising urgent questions for parents, clinicians, and platform makers.

Key Findings From the Latest Pew Research Center Survey on Teens

Pew reports that teens’ top reasons for using AI are to find information (57%) and get help with school (54%). Yet the social uses are no longer fringe. Sixteen percent say they chat with AI for casual conversation, and 12% turn to it for emotional support or advice—situations that traditionally involve friends, parents, teachers, or counselors.


There is also a perception gap at home. Sixty-four percent of teens say they use AI chatbots, but only 51% of parents think their teen does. Parents are broadly comfortable with academic and informational uses (79% approve of search, 58% of schoolwork), yet support drops sharply for social or emotional uses: only 28% are okay with casual chats and just 18% are comfortable with AI for support or advice. A majority—58%—say they are not okay with their child turning to AI for emotional help.

As for the long view, teens are split on AI’s societal impact. About 31% predict a positive effect over the next two decades, while 26% expect it to be negative, reflecting ambivalence about how far the technology should reach into daily life.

Why Teens Turn to Bots for Support and Advice

Teens describe AI as available, unflinching, and nonjudgmental—qualities that can feel scarce when emotions are high and privacy is paramount. In practice, many use chatbots for low-stakes “venting,” quick reframing of anxious thoughts, or pressure-free rehearsal of hard conversations. A teen might ask an AI how to apologize to a friend, manage test stress, or navigate a family disagreement, then iterate until the advice feels right.

That approach mirrors how young people already use search and social platforms for everyday dilemmas, but with a key difference: conversational agents simulate empathy and coherence, which can create an illusion of understanding. The American Psychological Association has cautioned that while digital tools can supplement care, they are not substitutes for professional evaluation, particularly when risks like self-harm, abuse, or severe depression may be present.


What Parents and Clinicians Are Saying About Teens and AI

Mental health professionals note potential upsides when chatbots steer users toward healthy coping and crisis resources. The World Health Organization has emphasized the promise of AI in health while urging rigorous safeguards, transparency about limitations, and human oversight—standards that general-purpose chatbots were not built to meet.

Parents, meanwhile, face a practical challenge: teens are experimenting with tools that can be helpful in one moment and misleading in the next. False certainty, generic advice, or emotionally suggestive responses can backfire. Experts recommend family conversations that set boundaries—what topics are appropriate for AI, what triggers a switch to a trusted adult, and how to verify guidance. Consistent check-ins often matter more than technical controls.

Safety and Product Design Questions Facing AI Platforms

General-purpose models were not designed as therapists, yet they increasingly occupy that space. That mismatch puts pressure on companies to harden guardrails: clearer disclaimers, age-aware experiences, safer default behaviors for sensitive topics, and proactive pointers to professional help. The Federal Trade Commission has warned developers against overstating what AI can do and urged rigorous testing to prevent foreseeable harms, especially for minors.

Schools and youth organizations are also entering the conversation. Some districts now fold AI literacy into digital citizenship curricula, teaching students to evaluate model outputs, recognize limitations, and understand privacy trade-offs. Common Sense Media has similarly advised that families treat AI guidance like any unvetted source—useful for brainstorming, never definitive for high-stakes decisions.

What Comes Next for Teen Mental Health and AI Use

The 12% figure is not just a data point; it is a signal that AI has crossed into the intimate spaces of adolescence. The next phase will likely blend product changes—safer defaults, better escalation paths—with cultural shifts that normalize asking for help from people, not just machines. For now, the healthiest approach may be pragmatic: encourage teens to use AI as a starting point, keep conversations open at home and at school, and make sure clear on-ramps to human support are always within reach.

By Pam Belluck
Pam Belluck is a seasoned health and science journalist whose work explores the impact of medicine, policy, and innovation on individuals and society. She has reported extensively on topics like reproductive health, long-term illness, brain science, and public health, with a focus on both complex medical developments and human-centered narratives. Her writing bridges investigative depth with accessible storytelling, often covering issues at the intersection of science, ethics, and personal experience. Pam continues to examine the evolving challenges in health and medicine across global and local contexts.
FindArticles © 2025. All Rights Reserved.