AI chatbots have gone from novelty to routine for many American teens. A new Pew Research Center analysis finds that 30% of U.S. teens say they use AI chatbots on a daily basis, evidence of how these systems have woven themselves into the fabric of life — in schoolwork, social interactions and entertainment. But as adoption grows, so too do concerns from parents, educators, clinicians and regulators about safety, reliability and mental health implications.
Daily use is widespread but uneven among U.S. teens
Pew’s survey offers a vivid portrait of digital saturation: 97% of teens say they go online daily, with 40% reporting they’re on the internet “almost constantly.” That figure is down six percentage points from last year but still far above the 24% who said the same a decade ago. In that world, chatbots have become the norm: roughly three in 10 teens use an AI chatbot on a daily basis, and 4% say they use these programs nearly continuously.
ChatGPT is the best known and most widely used of the group: 59% of teens say they’ve used it, well ahead of Google’s Gemini at 23% and Meta AI at 20%. Almost half of teenagers interact with a chatbot at least several times a week, while 36% say they never use one. The pattern suggests that brand awareness, classroom word of mouth and integration across apps help determine which tools win teen attention.
Adoption varies across demographics. Pew finds 68% of Black and Hispanic teenagers have used chatbots, compared with 58% of white teens, and Gemini and Meta AI were roughly twice as popular among Black teens as among white teens. Older teens (ages 15 to 17) are far more likely than younger teens (13 to 14) to use social media and chatbots. Household income also makes a difference: 62% of teens in households earning $75,000 or more say they use ChatGPT, compared with 52% of those below that threshold. Character.AI shows the opposite pattern: 14% of teens in lower-income homes have used it, roughly double the rate in higher-income households.
What teens use chatbots for in school and beyond
Teenagers regularly use chatbots for homework help, brainstorming, coding advice and language practice. Creative-writing prompts, role-play exercises and quick explanations of difficult subjects are typical points of entry. Surveys from youth-focused organizations like Common Sense Media have recorded a rapid rise in reliance on generative AI for academic and creative assignments, even as teachers worry about accuracy and plagiarism.
Some of the appeal is immediacy: it often seems faster to get a direct answer than to hunt through search results. And unlike traditional search, chatbots can carry on a personal-feeling back-and-forth. That conversational intimacy is precisely what makes safety guardrails so important.
Safety risks and mental health concerns for youth
Experts point to several risks. Hallucinations can present false information as fact. Safety filters can miss dangerous content, such as material about self-harm or eating disorders. Role-play chatbots can simulate intimate relationships and encourage emotional dependence. And privacy is an ongoing concern, with open questions about how teens’ prompts are stored, used to train models and linked to profiles across platforms.
Recent lawsuits have intensified scrutiny. The families of two teenagers have sued OpenAI, the maker of ChatGPT, claiming the chatbot gave the boys specific self-harm instructions before they took their own lives. In other cases linked to role-playing platforms, long conversations preceded tragedy. One such company has since scaled back open-ended chat for young users and introduced a more structured, choose-your-own-adventure-style product for kids.
Scale magnifies the stakes. OpenAI has estimated that some 0.15% of its weekly active users have conversations that touch on suicide. With hundreds of millions of weekly users, even a tiny percentage is a large number of people: 0.15% of 500 million, for instance, is 750,000 people every week. Clinicians note that while most exchanges are harmless, even occasional exposure to risky content can have serious implications for vulnerable youth.
Policy actions and product changes taking shape
Regulators and health leaders are starting to respond. The U.S. surgeon general has called for warning labels on social media, citing mounting evidence that the platforms can harm teenagers’ mental health. States are experimenting with age-verification laws and privacy rules for minors, while the Federal Trade Commission has signaled closer scrutiny of advertising claims and data practices involving AI. Abroad, countries are weighing tough new protections for young people; Australia has gone furthest, passing a law to bar under-16s from social media altogether.
Platforms are adding guardrails: age gates, sensitive-topic classifiers, crisis-response handoffs and stricter default settings for teen accounts. Some companies are exploring educational modes and restricting certain role-play features for minors. Still, age checks are relatively easy to evade, and safety systems can behave inconsistently across regions and languages, weaknesses that researchers and child-safety advocates have spotlighted again and again.
What parents and schools can do now to guide safe use
Experts recommend starting with transparency. Families should talk about when and why chatbots are used, and remind kids that outputs can be wrong or biased. Where possible, enable teen-specific settings, turn off chat history or limit how long data is retained. Schools can model responsible use by requiring students to disclose and cite AI assistance, encouraging them to cross-check answers against vetted sources, and setting clear limits on which assignments allow AI help.
Just as critical is talking openly about mental health. Teens should know they can close a chat and confide in a trusted adult if a conversation turns upsetting, manipulative or unsafe. If a student expresses suicidal thoughts to a chatbot, or in connection with a chatbot interaction, families and schools should escalate immediately and get professional help.
The takeaway: rapid adoption amid unresolved safety issues
Teens are adopting AI chatbots at an astonishing rate, with 30% now using them every day, and the tools are becoming part of how young people learn, create and communicate. Yet the same qualities that make chatbots so engaging raise genuine safety questions that industry and policymakers have not yet adequately answered. Until stronger protections are in place, the best defense is informed oversight, clear boundaries and frank conversations about both the promise and the perils of AI.