New York will require social media platforms to show young users warnings when they encounter features criticized for driving overuse, under a measure Gov. Kathy Hochul signed on Friday that targets autoplay, infinite scroll, push notifications and visible like counts.
The law requires platforms to display prominent warnings each time a young user interacts with a “predatory” feature, and at regular intervals afterward. Those interstitials cannot be dismissed immediately, a deliberate dose of design friction intended to interrupt binge-like behavior.

What the New York Law Requires From Social Apps
The bill, S4505/A5346, defines “addictive social media platforms” as services in which engagement-driven feeds and features, such as infinite scroll, autoplaying videos, push alerts or like counters, are a core part of the experience. The state attorney general may exempt a feature that serves another legitimate purpose unrelated to extending time in the app.
Operationally, platforms must be able to detect when minors are about to interact with these elements and surface clear, recurring warnings about the mental health risks. The measure builds on existing New York laws that already require parental consent before minors can be shown algorithmic “addictive feeds” or have their data bought and sold, and it adds restrictions that make it costlier for companies to collect or sell data on users under 18.
Expect follow-on rulemaking to determine who counts as a “young user,” what the warning must say, how long it must be displayed and how compliance will be audited. Age assurance will be another battleground as platforms weigh the need to gate accurately against privacy obligations and data minimization.
The Return of the Warning Label for Social Media
The move follows the U.S. Surgeon General’s recommendation last month that social apps carry warning labels modeled on the decades-old warnings required on tobacco and alcohol products. The concept is not a panacea but a low-friction nudge meant to raise awareness of risk at the moment of use.
The evidence on youth well-being adds urgency. The Surgeon General’s advisory on social media and youth mental health highlights links between excessive, algorithm-driven scrolling and viewing and disrupted sleep, anxiety and attention difficulties. CDC surveys have documented worsening mental health trends among teenagers, and Pew Research estimates that about 46% of U.S. teens say they are online “almost constantly.”
Design choices matter. Infinite scroll removes stopping cues, autoplay shrinks decision windows, and push notifications turn occasional check-ins into a constant twitch. These are the mechanisms that behavioral scientists, including Adam Alter, associate with habit formation and variable reinforcement.

How Platforms Could Adapt to New York’s Warning Rules
To comply, companies could gate scrolling and autoplay for users under 18 behind interstitial warnings about social media’s addictive potential, throttle push alerts between 10 p.m. and 6 a.m., and hide like counts by default. Some may build a “youth mode” that limits engagement features across the board and logs exceptions for legitimate uses such as safety alerts or accessibility.
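As a rough illustration of how that gating might be centralized, consider the sketch below. Everything in it is an assumption for demonstration purposes: the feature names, the age threshold, the hourly re-warning cadence and the quiet-hours window are placeholders, not terms drawn from the statute or from any platform’s actual code.

```typescript
// Hypothetical sketch of a client-side compliance gate for the scenarios above.
// Feature names, thresholds and quiet hours are illustrative assumptions, not
// requirements taken from the law or any platform's real implementation.

type GatedFeature = "infiniteScroll" | "autoplay" | "pushNotification" | "likeCounts";

interface UserContext {
  age: number;                 // assumed to come from the platform's age-assurance flow
  lastWarningShownAt?: Date;   // when this user last acknowledged an interstitial
}

const MINOR_AGE_THRESHOLD = 18;
const WARNING_INTERVAL_MS = 60 * 60 * 1000; // re-warn hourly; real cadence would follow AG guidance
const QUIET_HOURS = { start: 22, end: 6 };  // 10 p.m. to 6 a.m. notification throttle

function isMinor(user: UserContext): boolean {
  return user.age < MINOR_AGE_THRESHOLD;
}

function inQuietHours(now: Date): boolean {
  const hour = now.getHours();
  return hour >= QUIET_HOURS.start || hour < QUIET_HOURS.end;
}

function warningIsStale(user: UserContext, now: Date): boolean {
  if (!user.lastWarningShownAt) return true;
  return now.getTime() - user.lastWarningShownAt.getTime() > WARNING_INTERVAL_MS;
}

/** Decide whether a feature may run, must first show a warning, or should be suppressed. */
function gateFeature(
  user: UserContext,
  feature: GatedFeature,
  now: Date = new Date()
): "allow" | "showWarning" | "suppress" {
  if (!isMinor(user)) return "allow";

  // Quiet-hours throttle for push notifications.
  if (feature === "pushNotification" && inQuietHours(now)) return "suppress";

  // Hide like counts by default for minors.
  if (feature === "likeCounts") return "suppress";

  // Engagement features require a recent, non-dismissable interstitial.
  if (warningIsStale(user, now)) return "showWarning";
  return "allow";
}

// Example: a 16-year-old opening an autoplaying feed at 11 p.m.
const teen: UserContext = { age: 16 };
console.log(gateFeature(teen, "autoplay", new Date("2025-01-01T23:00:00")));          // "showWarning"
console.log(gateFeature(teen, "pushNotification", new Date("2025-01-01T23:00:00"))); // "suppress"
```

The appeal of a single gate like this, at least in principle, is auditability: every blocked notification or displayed warning passes through one function whose decisions can be logged and reviewed when regulators ask how compliance works.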
That raises technical and user-experience questions: Will warnings be displayed per session or per feature? How long must they stay on screen? What metrics demonstrate effectiveness? The likeliest answer is that the most effective design will combine warnings with some form of friction, such as pauses, opt-ins or time limits, rather than relying on disclaimers alone.
Legal and Industry Reactions to New York’s New Law
Expect pushback from tech trade groups that have fought youth online-safety laws in other states, arguing that compelled warnings violate the First Amendment and that federal statutes preempt state regulation. Courts are divided on these questions, and New York’s particular formulation, grounded in health risks and feature-specific triggers, will almost certainly be challenged.
New York is not alone. California lawmakers have introduced similar proposals, and multiple states have adopted rules on teenagers’ feeds, age verification and data use. The result is a patchwork of compliance obligations that could push platforms toward a uniform national standard even in the absence of federal legislation.
What to Watch Next as New York Implements Warnings
The warning language, timelines, penalties and other specifics will be set by guidance from the attorney general. Independent evaluation will be critical: cigarette warning labels raised awareness of risk, but digital environments are different, and researchers will look for measurable declines in time-on-platform and nighttime usage among teens.
Hochul’s approval continues a series of tech policy actions in the state, including an AI safety law, that signal a broader push on platform accountability. For families and schools, the new labels won’t be a substitute for digital literacy or parental controls, but they could deliver timely reminders that make disengaging a bit easier when it matters most.