Four whistleblowers say Meta discouraged and diluted internal research into children’s safety, alleging the company tightened rules to keep sensitive findings out of view and steered researchers away from plain-language risk assessments. The disclosures, delivered to Congress and first reported by The Washington Post, focus heavily on Meta’s virtual reality products and paint a picture of a company prioritizing legal insulation over transparent investigation.
What the disclosures allege
The whistleblowers, two current and two former employees, claim Meta revised its internal policies to constrain research on sensitive topics, including children, harassment, race, and politics, after high-profile revelations about youth mental health on Instagram. According to the documents described to lawmakers, researchers were advised to involve company lawyers so that communications would fall under attorney-client privilege, and to avoid explicit terms such as “not compliant” or “illegal” when writing up risks.

One former researcher, Jason Sattizahn, told The Washington Post he was instructed to delete interview recordings in which a teen reported a 10-year-old sibling being sexually propositioned on Horizon Worlds, Meta’s social VR platform. The whistleblowers also say employees were discouraged from discussing how children under 13 access VR experiences, despite policies barring them from the service. In one internal test described in the materials, users with Black avatars allegedly encountered racial slurs within seconds of entering a VR space—evidence, the whistleblowers argue, that enforcement and safety tooling were inadequate.
Meta did not immediately provide a detailed response to the new claims, but the company has repeatedly said it invests heavily in safety and integrity and deploys a mix of policy, detection systems, and human review to protect teens across its platforms.
Policy shifts after earlier research leaks
The alleged crackdown on language and research scope followed earlier internal findings—revealed by Frances Haugen—that Instagram could negatively affect teen girls’ well-being. Those disclosures prompted intense scrutiny from U.S. lawmakers and regulators, and kicked off a broader debate about how platforms measure, disclose, and mitigate risk to youth. The whistleblowers now contend Meta’s subsequent policy changes made it harder to produce unvarnished analyses of child safety issues, particularly in VR.
The tension here is familiar: companies argue they are protecting sensitive data and avoiding misinterpretation, while critics say legal privilege and softened phrasing can obscure the scale of harm. For researchers, the result can be a chilling effect—less specific documentation, fewer replicable tests, and slower escalation of problems that require urgent fixes.
VR risks and enforcement gaps
VR is especially difficult to police. Real-time voice chat, embodied avatars, and user-generated worlds complicate detection of grooming, sexual content, and harassment. Safety advocates have warned that age gates are easily bypassed and that effective moderation must combine automated detection, proactive room-level controls, and rapid human enforcement. The whistleblowers’ accounts suggest Meta’s safeguards in Horizon Worlds have not reliably kept younger children out or protected teens from abuse.
These claims also arrive as scrutiny widens beyond social feeds. Reuters recently reported that Meta’s AI rules at one point allowed chatbots to engage in “romantic or sensual” conversations with minors before being tightened—an illustration of how quickly new product categories can introduce youth safety risks if policy does not keep pace with deployment.
Legal, regulatory, and reputational stakes
U.S. state attorneys general have sued Meta, alleging the company designed features that are harmful and addictive for young users. The Federal Trade Commission has sought to tighten its existing privacy order with the company, proposing changes that would place additional restrictions on data practices involving minors. In Congress, proposals such as the Kids Online Safety Act aim to impose a duty of care and stronger controls for teen experiences online.
Internationally, the European Union’s Digital Services Act requires large platforms to assess systemic risks, including those affecting minors, and to provide regulators with access to data and methodologies. The United Kingdom’s Online Safety Act adds obligations around preventing grooming and harmful content. If the whistleblowers’ claims are substantiated, regulators could demand more rigorous risk assessments, comprehensive access for auditors, and clearer documentation of how child safety trade-offs are made.
Meta’s defenses and what to watch
Meta has long emphasized its safety investments, pointing to tens of thousands of employees working on integrity, dedicated youth well-being teams, and features like Family Center, parental supervision tools, and teen-specific defaults that limit unwanted contact. In VR, the company highlights personal boundary features, safe zones, stricter default settings for younger users, and reporting tools accessible from within headsets.
Key questions now for lawmakers and regulators include: Were researchers systematically discouraged from documenting youth risks in clear terms? Did legal privilege shield operational issues from internal escalation or external oversight? And how quickly did Meta translate known risks into product changes and enforcement improvements?
The disclosures raise the stakes for independent audits of both content moderation and product design in VR and AI-driven experiences. They also underscore a broader lesson from past platform crises: without transparent research and unambiguous language, youth safety problems can fester in the gap between what teams know and what leaders are prepared to acknowledge.