Pinterest chief executive Bill Ready is urging lawmakers to bar children under 16 from social media, arguing the industry has failed to protect young users and that guardrails should now be set by government, not platforms. His call, published in a Time opinion piece, lands as Australia moves to restrict under-16s from social networks and other governments weigh similar steps.
Ready frames the status quo as a risky experiment: engagement-driven feeds reward content that keeps kids staring at screens, while safety features often lag. He contends the incentives are misaligned and likens the dynamic to past public health battles where regulation forced change.
- Why a CEO Wants Lawmakers to Step In on Youth Safety
- What Pinterest Changed for Teens on Its Platform
- The Ban Debate and Global Momentum on Age Limits for Social Media
- Age Verification Trade-Offs and Privacy Concerns
- What the Evidence Says About Youth and Social Media
- What Comes Next in the Push to Regulate Teen Social Media
Why a CEO Wants Lawmakers to Step In on Youth Safety
Ready’s argument is fundamentally about incentives. Ad-supported platforms optimize for time and clicks, not child well-being. He says that leaves kids vulnerable to adult strangers, compulsive use, and content that undermines attention and mood—issues spotlighted in the U.S. Surgeon General’s 2023 advisory, which warned of “profound risk” to youth mental health from certain patterns of social media use.
Public health data adds urgency. The Centers for Disease Control and Prevention reported record levels of persistent sadness and hopelessness among U.S. high schoolers in recent surveys, with especially steep increases among girls. While researchers continue to debate causation versus correlation, the trend is unmistakable, and policymakers are responding.
What Pinterest Changed for Teens on Its Platform
Pinterest still allows sign-ups starting at age 13, the effective minimum under the U.S. Children’s Online Privacy Protection Act (COPPA), but the company stripped “social” out of the experience for younger users. Accounts for people under 16 are private by default, undiscoverable, and walled off from strangers—no unsolicited messages, comments, or likes. The company positions Pinterest more as a visual search and inspiration tool than a chat-driven network for teens.
Crucially, Ready says those changes didn’t hurt growth among younger audiences. According to the company, Gen Z now accounts for over 50% of users, a data point he cites to argue that safer defaults build trust rather than push teens away.
The Ban Debate and Global Momentum on Age Limits for Social Media
Ready’s stance arrives as governments test stricter age limits. Australia has advanced plans to restrict under-16s’ access to social media, while U.S. states have tried to require parental consent or age checks; courts have paused some laws amid First Amendment and privacy concerns. The United Kingdom’s Online Safety Act and the European Union’s Digital Services Act already impose heightened duties to protect minors online.
In the U.S., Ready backs the proposed App Store Accountability Act, which would shift age verification upstream to app marketplaces. The premise: it’s easier to enforce rules at the gate than across millions of apps. This approach mirrors proposals from child-safety advocates who argue that app stores and operating systems are well placed to verify age and enable parental controls consistently.
Age Verification Trade-Offs and Privacy Concerns
Any under-16 ban hinges on reliable age checks. Options include government ID, credit database queries, or AI-based facial age estimation. Each raises issues. Civil liberties groups like the Electronic Frontier Foundation and the ACLU warn that mandatory ID collection could chill speech, exclude families without ready documentation, and create sensitive honeypots of youth data. Biometric estimation reduces identity risk but introduces accuracy and bias questions.
There’s also an equity problem: strict age-gating could push younger teens from mainstream platforms with moderation into unregulated corners of the internet. That tension—safer spaces versus potential displacement—sits at the heart of the policy fight.
What the Evidence Says About Youth and Social Media
Research is nuanced. Large-scale studies, including work by the Oxford Internet Institute, often find small average links between social media use and well-being, but bigger risks for certain groups, especially heavy users or those already struggling. The National Academies have called for more longitudinal research and access to platform data to clarify cause and effect. Meanwhile, surveys from Pew Research Center show near-ubiquitous teen exposure—YouTube reaches the vast majority of U.S. teens, with strong adoption of TikTok, Instagram, and Snapchat—making any policy shift consequential at national scale.
Platforms have rolled out mitigations: TikTok set default daily limits for teens, YouTube curbed autoplay and late-night notifications for younger users, and Instagram restricts messages from unfamiliar adults and offers parental tools. Ready’s argument is that such steps, while helpful, don’t fix the underlying incentive problem—and that a bright-line age ban would force systemic change.
What Comes Next in the Push to Regulate Teen Social Media
Expect the battle to center on where and how age is verified, and who bears liability when kids slip through. App stores, device makers, and carriers could face new compliance obligations. Smaller developers worry about cost and complexity, while parents want usable, privacy-preserving tools that actually work.
Ready’s message is blunt: if tech doesn’t raise the bar, legislators will. Whether governments opt for outright bans, parental-consent regimes, or stricter design standards, the direction of travel is clear. For teens, the future of “social” may look less like open networks and more like curated, private-by-default spaces—by policy, not just by product design.