A newly unsealed court filing shows Instagram head Adam Mosseri being pressed on why it took years to release key teen safety tools, including a DM nudity-blurring feature, despite internal awareness that private messages could expose minors to explicit content. Prosecutors focused less on what Instagram has built recently and more on why those protections arrived so late.
Court Filing Puts Mosseri Under Scrutiny
In testimony revealed through the filing, attorneys cited an internal exchange between Mosseri and Meta’s security leadership in which he acknowledged that “horrible” things could happen in Instagram DMs. Lawyers argued those risks included unsolicited explicit images sent to minors. Mosseri agreed such scenarios were possible but pushed back on the notion that the company should have warned parents that messages aren’t actively monitored beyond efforts to detect and remove child sexual abuse material.
Mosseri framed the issue as a familiar trade-off: users expect privacy in messaging, while the platform must mitigate harm. The filing indicates prosecutors’ central aim is to establish that Instagram knew of the dangers to teens for years but moved too slowly to deploy a product fix that could curb exposure in private messages.
Years-Long Gap Before Instagram DM Nudity Filter
Instagram eventually introduced a setting that automatically blurs suspected nude images in DMs for teen accounts, forcing users to tap through a warning before viewing. The feature relies on image-level detection to reduce exposure to unwanted content and to disrupt grooming tactics that often begin with boundary-testing messages in private chats.
Prosecutors argued the delay matters more than the feature’s current form. During that gap, teens continued to receive unsolicited sexual images. Instagram, for its part, has pointed to a broader safety stack: default-private accounts for younger users, limits that stop unknown adults from messaging teens who don’t follow them, sensitive content controls, and a Family Center with parental supervision tools. The question before the court is whether those safeguards came fast enough, and whether business incentives slowed their arrival.
Data Underscore Exposure Risks for Teens
The filing disclosed internal survey data indicating 19.2% of respondents ages 13 to 15 reported seeing nudity or sexual images on Instagram that they did not want to see. Another 8.4% of teens in that same age range said they had encountered self-harm content on the app within the prior week. These figures align with long-standing warnings from youth-safety groups and health authorities about the frequency of harmful content exposure online.
External context amplifies the stakes. The U.S. Surgeon General has urged stronger default protections for minors, and the National Center for Missing and Exploited Children has documented steady growth in online enticement reports over recent years. With Instagram among the most widely used platforms for U.S. teens, even small exposure rates translate into large absolute numbers of affected users.
Why Safety Features Take So Long to Reach Teens
Building a reliable nudity filter at Instagram’s scale is technically complex. On-device classification must be fast and accurate, minimize false flags for nonsexual content (like medical or breastfeeding imagery), and work across languages and cultures. Privacy design choices, including the use of stronger encryption in messaging, also constrain server-side scanning and push companies toward local device analysis and more conservative interventions.
Prosecutors counter that resource allocation and growth priorities, not just technical hurdles, drive timelines. Safety features that add friction can reduce session length and message volume, and even a 1–2% drop in engagement can be material for ad-driven platforms. That tension—between reducing harm and preserving engagement—sits at the heart of the lawsuits.
Legal and Policy Pressure on Platforms Intensifies
The case originates in the U.S. District Court for the Northern District of California and is part of a broader wave of litigation alleging that major platforms are defective by design because they maximize screen time in ways that harm minors. Defendants include Meta, Snap, TikTok, and YouTube. Parallel actions are underway in Los Angeles County Superior Court and in New Mexico, with plaintiffs seeking to show that companies prioritized user growth over youth safety.
At the same time, policymakers are tightening the screws. Several U.S. states have advanced or enacted laws on teen access, age verification, and default safety settings. Abroad, the UK’s Online Safety Act and the Age-Appropriate Design Code have set de facto global expectations around risk assessments, teen-first defaults, and proactive moderation of harmful content.
What to Watch Next in Instagram Teen Safety Case
The key questions now are practical: Will courts push platforms toward default-on protections with clearer timelines and public reporting? Will Meta disclose outcome metrics—like reductions in reports of unwanted nudes and grooming attempts among teens—and allow independent audits? And will teen-focused filters expand to broader user groups as a universal safeguard against image-based abuse?
The court filing makes one thing plain: prosecutors are less interested in how polished Instagram’s teen tools look today and more in why a widely anticipated protection, the nudity filter for DMs, arrived only after years of documented risk.