Common Sense Media has issued a scathing assessment of xAI’s Grok chatbot, warning that the service poses “unacceptable risks” to teens due to weak age protections, porous safeguards, and features that make harmful content easy to share at scale on X. The nonprofit’s review found that Grok’s Kids Mode and age checks failed to meaningfully restrict under-18 users’ access to explicit or dangerous material.
The group evaluated Grok across its website, mobile app, and X integration, including text, voice, and Kids Mode. Its conclusion: the product’s design and enforcement leave minors exposed to explicit sexual content, biased and violent responses, and step-by-step explanations of risky behaviors—all while a single tap can broadcast outputs to millions on X.
What the safety review found about Grok’s teen risks
Investigators said Grok did not reliably detect teen users, even when account profiles clearly indicated an age of 14. In practice, the system reportedly treated the teen account like an adult’s, serving content and responses without additional friction. X has acknowledged it conducts age checks where legally required, such as in the UK, Ireland, and the EU, but Common Sense Media argues that this compliance-only approach leaves large gaps for minors elsewhere.
Kids Mode—the feature designed to tailor experiences for younger users—allowed sexually explicit and sexually violent language to slip through, the report said. Reviewers also noted that Grok could facilitate “dangerous ideas” and provide detailed answers that undercut harm-reduction goals. The nonprofit called out erotic “AI companion” use cases as especially concerning for adolescents seeking advice or companionship.
Grok has already faced criticism for sexualized image generation. After public backlash over the creation of sexualized images from user photos, X first limited the feature to paid users and later banned prompts requesting images of real people “in revealing clothing, such as bikinis.” Common Sense Media says such policy swings highlight a reactive approach rather than a teen-first, safety-by-design strategy.
How Grok compares to other AI bots for teen safety
While the nonprofit has flagged most general-purpose chatbots as “High Risk” for minors, it singled out Grok as among the worst it has tested. By contrast, the educational assistant Khanmigo in the Khan Academy Kids app was rated low risk, underscoring that safer-by-design models are achievable when products prioritize narrow, age-appropriate use cases and rigorous guardrails.
Industry peers have started pushing stricter teen protections. Meta, for example, has restricted teen access to some AI characters and features. Others have leaned on teen-specific policies, escalation flows, and default filters. Common Sense Media’s assessment suggests that Grok’s identity as an edgy, real-time assistant closely tied to X’s social graph complicates the adoption of the conservative defaults typically necessary for younger audiences.
Age assurance and the policy backdrop shaping safeguards
The findings land as regulators scrutinize how platforms protect minors. The EU’s Digital Services Act requires very large platforms to assess and mitigate risks to children. The UK’s Online Safety Act emphasizes age assurance and safer defaults. In the US, COPPA governs data collection for children under 13, and multiple states are weighing or implementing youth safety laws that push for stronger age verification and guardrails.
The stakes are high. Pew Research Center has reported that roughly 46% of US teens say they are online “almost constantly,” making even sporadic safety failures consequential. When an AI tool can both generate harmful material and amplify it swiftly through a social network, small lapses in age gating or content classification can spiral into large-scale exposure.
xAI’s challenge and potential fixes to protect teen users
xAI and X are trying to balance Grok’s freewheeling persona with the compliance, safety, and reputational expectations that come with reaching younger users. The service’s near-real-time pipeline to X content is a competitive differentiator, but it also introduces a higher likelihood of unfiltered or fast-spreading harmful outputs. Common Sense Media’s critique implies that Grok’s current architecture and incentives—openness, speed, virality—conflict with best practices for teen protection.
Safety experts routinely advocate for several measures: robust age assurance beyond self-declared birthdays, teen-only experiences that default to conservative settings, strong blocklists for sexual and violent material, transparent model cards and red-team audits, and predictable, human-reviewed escalation paths for sensitive topics. Limiting one-click sharing from youth accounts and rate-limiting risky queries can further reduce harm.
Bottom line on Grok’s teen safety risks and needed changes
Common Sense Media’s verdict puts fresh pressure on xAI to rethink Grok’s youth safeguards and on X to align distribution features with child safety goals. With regulators circling and peers tightening teen protections, the question is not whether general-purpose chatbots need stricter defaults for minors—it’s how quickly platforms like Grok can deliver them, and whether the product’s design will put teen safety ahead of virality.