Pinterest chief executive Bill Ready is urging governments to prohibit social media for users under 16, arguing that platforms have not been designed with young people’s well-being in mind and that incremental fixes have failed. His call, made in a widely discussed op-ed, places him among the few Big Tech leaders openly advocating for a legal age floor that goes beyond current norms.
The stance lands as lawmakers around the world test new limits on minors’ online access and intensify pressure on platforms to verify ages. It also raises practical questions: how to enforce such a ban without over-collecting data, and whether a patchwork of national rules can realistically govern global networks.
Why Ready Is Pushing For An Under-16 Line
Ready argues that handing adolescents unfettered access to engagement-driven feeds amounts to a public-health gamble. U.S. health authorities have voiced similar concerns: the U.S. Surgeon General has warned that social media can pose a “profound risk” to youth mental health, while the American Psychological Association urges age-appropriate guardrails and active parental oversight.
Data trends underline the worry. In its 2021 Youth Risk Behavior Survey, the Centers for Disease Control and Prevention reported that 57% of U.S. teen girls experienced persistent feelings of sadness or hopelessness, a sharp rise over the past decade. Pew Research Center finds that about 95% of teens use social platforms, and roughly one-third say they are online “almost constantly.” Researchers continue to debate causality, but the correlation between heavy use and anxiety, sleep disruption, and attention difficulties appears consistent across studies.
Ready’s comparison to regulated industries is pointed: sales of alcohol and tobacco have long been subject to age thresholds and marketing limits because the risks to young people are well-documented. He suggests social platforms should face similar bright lines until they can prove youth-safe design at scale.
Where Governments Are Moving On Youth Social Media Access
Several governments have begun to tighten access. Australia has advanced measures to block under-16s from joining social platforms and has trialed age-assurance technologies through its eSafety framework. France has passed a law requiring parental consent for social media users under 15. Lawmakers in Germany’s ruling coalition have voiced support for variants of a youth ban, and officials in Malaysia, Spain, and Indonesia have outlined plans to curb minors’ access.
In the U.S., multiple states have proposed or passed laws limiting minors’ social media use, often requiring parental consent and new identity checks. Some measures face court challenges from civil liberties groups and industry trade associations, which argue that sweeping age limits risk infringing free speech and forcing invasive data collection. The legal trajectory remains unsettled.
Beyond national efforts, regulators are tightening platform obligations. The United Kingdom’s Age-Appropriate Design Code sets a baseline for treating minors’ data with heightened protections, while the European Union’s Digital Services Act compels large platforms to assess and mitigate systemic risks to minors. An enforceable under-16 rule would raise the bar further by closing off access altogether.
The Enforcement Puzzle For Under-16 Social Media Bans
Any ban hinges on robust, privacy-preserving age verification, and the options are imperfect. Government ID checks can be effective but introduce data security and inclusion concerns. Credit bureau lookups and mobile carrier checks exclude many teens and raise accuracy issues. AI-based facial age estimation avoids storing identity documents but raises civil liberties questions, and its demographic biases can go undetected without independent audits.
Policy experts increasingly recommend a layered approach: device-level parental controls by default, platform-based age estimation with third-party certification, and strict limits on data retention and reuse. Standards bodies and regulators, from NIST to Europe’s data protection authorities, are working toward benchmarks that could give companies a defensible playbook.
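The layered approach described above can be sketched in code. This is purely illustrative: the signal names, the 16-year threshold, and the deny-first combination rule are assumptions for the sake of the example, not any platform's actual system.

```python
# Illustrative sketch of a "layered" age-assurance decision: combine
# independent signals (device parental controls, platform-side age
# estimation, a certified third-party check) and deny access if any
# available layer indicates an under-16 user. All names and thresholds
# here are hypothetical.

from dataclasses import dataclass
from typing import Optional

MIN_AGE = 16  # the bright line discussed in the article


@dataclass
class AgeSignals:
    parental_control_says_minor: Optional[bool]  # device-level flag, if set
    estimated_age: Optional[float]               # platform estimate, if any
    certified_check_passed: Optional[bool]       # third-party attestation


def allow_access(signals: AgeSignals) -> bool:
    """Deny if any available layer flags an under-16 user; otherwise
    require at least one affirmative layer before allowing access."""
    if signals.parental_control_says_minor:
        return False
    if signals.estimated_age is not None and signals.estimated_age < MIN_AGE:
        return False
    if signals.certified_check_passed:
        return True
    # Fall back to the platform estimate; with no signals at all, deny.
    return signals.estimated_age is not None and signals.estimated_age >= MIN_AGE
```

Note the design choice implied by the policy discussion: negative signals override positive ones, and the absence of any signal defaults to denial, which keeps data collection minimal but shifts the burden onto users who cannot produce a verifiable signal.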
Pinterest’s Safety Playbook And Youth Protections
Ready points to Pinterest’s own model to argue that youth protections need not sink growth. The company restricts social features for accounts under 16, emphasizing inspiration and search over public interaction. It also deploys “compassionate search” interventions that surface mental health resources when users query self-harm content, and it banned weight-loss ads to reduce harmful pressure on teens.
Pinterest has told investors that Gen Z is its fastest-growing cohort even with those limits, suggesting that a product not centered on social validation can still win younger users. Whether engagement-heavy platforms could replicate that dynamic without gutting their business models is an open question that lies at the heart of the policy debate.
Debate Inside Tech And What Comes Next For Policy
Industry reaction is split. Some child-safety advocates and pediatric organizations back firm age thresholds, arguing platforms have had years to self-regulate. Digital rights groups warn that bans could drive teens to unsupervised corners of the internet and entrench surveillance infrastructure. Even supporters concede that carve-outs for education, messaging within families, and crisis resources must be handled with care.
Ready’s intervention raises the political stakes by signaling that at least one major platform is prepared to absorb near-term friction in favor of a hard rule. The next phase will likely be technical: governments selecting approved age-assurance methods, defining penalties, and coordinating cross-border enforcement so that rules are not trivially circumvented.
The core tension remains unresolved: society wants the benefits of connection and creativity for young people without the harms of algorithmic amplification and social pressure. By calling for a straightforward under-16 bar, Pinterest’s CEO is betting that a bright line, even if blunt, will move the industry faster than another round of voluntary codes and partial fixes.