The Motion Picture Association has formally demanded that Meta stop calling the teen experience on Instagram “PG-13,” arguing that the company is trading on the credibility of a film rating the MPA has administered for decades. The cease-and-desist letter, first reported by The Wall Street Journal, calls Meta’s framing “literally false and highly misleading” because a studio-backed rating system is nothing like social media controls that lean on automated moderation.
Why the MPA Is Pushing Back on Instagram’s PG-13 Label
At stake is brand integrity. The MPA’s Classification and Rating Administration introduced PG-13 in 1984 to warn families that a movie may contain material inappropriate for children under 13. It gradually became shorthand for age-appropriateness in theaters and on streaming services, backed by human review and published guidelines covering language, violence, sexual content, and thematic material.

The MPA maintains that putting PG-13 on a social platform wrongly equates the two systems. The association notes that Meta’s process “relies heavily on artificial intelligence,” which it says should not be compared to CARA’s human review. The concern is not just trademark misuse but a fear that moderation errors, inevitable at the scale of an AI-driven system, could erode public faith in the film ratings themselves. The MPA has long argued its marks are certification marks, a special category of trademark with tightly defined rules so there is no confusion about who certifies what.
Meta’s Argument, and the Issue of Fair Use
Meta responds that it never claimed MPA certification, and that Instagram’s settings are described as “guided by” PG-13, not as carrying the actual rating.
The company also argues its references amount to nominative fair use, a doctrine that permits a company to reference another’s mark to describe compatibility or inspiration, so long as it doesn’t imply sponsorship.
Legal experts say the line is thin. Even truthful references to a mark can draw objections under the Lanham Act if they confuse the public or imply endorsement. The MPA’s letter suggests it views Meta’s phrasing as more than a comparative benchmark or mere marketing puffery. If the dispute reached court, a judge would likely weigh whether Meta used no more of the mark than necessary, whether its statements were accurate, and whether users would reasonably conclude there was an association with the MPA.
How a PG-13 Rating Differs From Platform Content Controls
Rating films and moderating social media are not analogous tasks. CARA’s PG-13 rating evaluates a single, static work before release. Platform safety settings, by contrast, triage a firehose of text, images, and video in real time. Even the most sophisticated moderation is a blend of machine classification and human review, and it still struggles with context, sarcasm, and coded language.

That operational gap matters. Academic audits have documented measurable rates of false positives and false negatives in automated moderation, particularly on sensitive or context-dependent topics. A label borrowed from film ratings could imply a precision that doesn’t exist in an environment where errors at scale are inevitable. It’s one reason other platforms steer clear of movie-rating terminology: YouTube uses “age-restricted” categories, while TikTok introduced “Content Levels” to screen material for younger teens without invoking an outside certification mark.
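To see why even small error rates matter at platform scale, here is a back-of-the-envelope sketch. Every number in it is hypothetical, chosen purely for illustration; none comes from Meta, the MPA, or any published audit.

```python
# Back-of-the-envelope sketch of moderation errors at scale.
# All inputs are hypothetical and for illustration only.

def moderation_errors(daily_items: int, false_positive_rate: float,
                      false_negative_rate: float, violating_share: float):
    """Estimate daily misclassifications for an automated content filter."""
    violating = daily_items * violating_share       # items that should be caught
    benign = daily_items - violating                # items that should pass
    missed = violating * false_negative_rate        # harmful content slipping through
    wrongly_flagged = benign * false_positive_rate  # benign content blocked in error
    return missed, wrongly_flagged

# Hypothetical scenario: 1 billion items per day, 1% of which violate policy,
# filtered by a strong classifier with a 5% false-negative rate and a
# 0.5% false-positive rate.
missed, wrongly_flagged = moderation_errors(
    daily_items=1_000_000_000,
    false_positive_rate=0.005,
    false_negative_rate=0.05,
    violating_share=0.01,
)
print(f"missed violations per day: {missed:,.0f}")
print(f"wrong removals per day:    {wrongly_flagged:,.0f}")
```

Even with accuracy most classifiers would envy, the arithmetic yields hundreds of thousands of missed violations and millions of wrongful removals every day, which is the core of the argument that a film-rating label overstates what automated filtering can promise.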
The Stakes for Teens and Platforms as Safety Settings Evolve
Instagram’s teen protections are already under scrutiny. Meta faces lawsuits from several state attorneys general over alleged harm to young users, and European regulators have tightened rules on targeting and age-appropriate content under new digital regulations. Pew Research Center polling indicates Instagram remains a mainstay for U.S. teens, used daily alongside YouTube, TikTok, and Snapchat, so any change to default content filters is significant at population scale.
From an industry perspective, the MPA worries that a label studios, parents, and exhibitors rely on when marketing movies and setting admission standards will be diluted. For Meta, the PG-13 framing likely served to signal a familiar threshold for what teens see by default. The problem: borrowing a well-known certification without the certifier invites legal and reputational risk, however well-intentioned the goal of streamlined safety messaging.
What Comes Next in the MPA and Meta PG-13 Dispute
A quick settlement would likely center on revised language, perhaps dialing back references to “PG-13” in favor of platform-native labels, while preserving the substance of Meta’s settings. If neither party backs down, the dispute could become a test of how courts apply trademark and false advertising law to safety claims about algorithmic systems, a question growing more urgent as tech companies borrow familiar consumer labels to explain complex AI-driven controls.
Either way, expect platforms to tread more carefully when invoking third-party certifications. For parents and the teens who grouse about the settings, the common-sense takeaway remains the same: platform controls are improving but not foolproof, and they aren’t a movie rating. Clearer language from tech companies, plus real, verifiable transparency about how these filters work, would do more to build trust than any borrowed acronym.
