Instagram’s chief, Adam Mosseri, is calling for a strategy change as AI-generated “slop” floods feeds. In a series of comments posted to Threads, Mosseri argued that authenticity is a scarce (and therefore valuable) asset, and that the platform needs fresh tools to lift up “raw” human creators. His message is clear: in a world where everything can be engineered to perfection by machines, imperfection has become the new sign of life.
The problem Mosseri aims to fix on Instagram feeds
Instagram is struggling to cope with a rising tide of AI-generated images and videos, from photoreal avatars and fabricated scenes to fake politicians and synthetic influencers that suck up attention at scale. Detection alone is unlikely to keep up, Mosseri cautioned, because AI keeps getting better at mimicking reality. Instead, he argued, platforms should do more than reactive labeling: they should design systems that surface legitimate accounts and original work from real people.

That position mirrors a broader industry shift. Big platforms have been scrambling to roll out labels and disclosure tools, but holes remain. Meta brought AI-generated content labels to Instagram, Facebook, and Threads in 2024, mixing user disclosure with technical signals, but not all AI media gets caught. TikTok has added a “Manage Topics” setting that lets users see less AI-generated content, a sign the issue has moved from niche concern to mainstream consumer demand.
Fingerprinting real media, not hunting fakes
Mosseri proposed a notable strategy shift: make it easier to show what’s real rather than chasing what’s fake. That dovetails with the work of the Coalition for Content Provenance and Authenticity (C2PA), an open standard supported by companies including Adobe, Microsoft, the BBC, Sony, Nikon, and Arm. The idea is to attach cryptographic signatures and tamper-evident Content Credentials to media at capture and through every edit, so viewers can trace an image or video back to its origin and forward through changes.
Hardware is finally starting to catch up. Leica now ships its M11-P with Content Credentials enabled by default, Sony has demonstrated in-camera signing based on C2PA principles, and Nikon and Canon are involved in similar provenance initiatives. If Instagram can read and preserve these signals on upload, the platform could surface verified real-world media in feeds, reassure audiences, and lower the stakes of an arms race against undetectable fakes.
The challenge is as much practical as technical. Social platforms routinely compress and transcode uploads, often stripping metadata in the process. A durable provenance pipeline would need every step, from camera or capture app to editing tool to social distribution, to preserve the cryptographic chain without breaking it.
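The chain-of-custody idea behind that pipeline can be sketched in a few lines. This is a toy illustration only, not the C2PA specification: real Content Credentials use X.509 certificate signatures and manifests embedded in the media file, whereas here a keyed HMAC stands in for a signature, and the key, function names, and "action" labels are all hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key, for illustration only; real C2PA signing
# uses certificate-based public-key signatures, not a shared secret.
SECRET_KEY = b"demo-signing-key"

def sign_step(prev_signature: bytes, media_bytes: bytes, action: str) -> bytes:
    """Bind one processing step (capture, crop, filter...) to the media
    bytes AND to every prior step, by folding the previous signature in."""
    digest = hashlib.sha256(media_bytes).digest()
    message = prev_signature + digest + action.encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify_chain(steps) -> bool:
    """Re-derive the chain from scratch; any altered byte, reordered
    step, or relabeled action breaks verification."""
    sig = b""
    for media_bytes, action, recorded_sig in steps:
        sig = sign_step(sig, media_bytes, action)
        if not hmac.compare_digest(sig, recorded_sig):
            return False
    return True

# A two-step chain: capture, then an edit.
capture = b"raw sensor bytes"
sig1 = sign_step(b"", capture, "capture")
edited = capture.replace(b"raw", b"cropped")
sig2 = sign_step(sig1, edited, "edit:crop")
chain = [(capture, "capture", sig1), (edited, "edit:crop", sig2)]
```

An intact chain verifies end to end, while changing any media bytes after signing (as a lossy transcode would) makes verification fail, which is exactly why platforms must carry the credentials through their processing rather than re-encode and discard them.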
Ranking originality and source credibility
In addition to provenance, Mosseri said Instagram should “surface credibility signals about who’s posting” and keep working on ranking for originality. That follows earlier steps by Instagram to limit the reach of aggregator accounts and reward creators who post new material rather than reposting other people’s content. Look for more algorithmic attention to first-published material, transparent account histories, and indications of real-world identity, especially for accounts influencing news, culture, or commerce.

The trade-off is delicate. Push provenance and verification too hard, and smaller or privacy-conscious creators may be penalized. Move too slowly, and the creators who film, draw, or photograph the messy, human side of life lose out to plastic perfection that costs next to nothing to churn out. The winning formula will have to pair trust-building with low-friction creation and discovery.
Tools that embrace more of the “raw” aesthetic
Mosseri anticipates a growing appetite for unfiltered, unflattering, and unproduced posts as audiences grow weary of machine-smooth feeds. That clears the way for features that prioritize in-camera veracity: minimal-edit capture modes, provenance badges on Reels and Stories, and creation flows that emphasize process as much as output. Expect AI to play a supporting role here too, helping with edits, captions, and accessibility, while disclosing when it does.
The advice for creators is simple: publish new work often, document how it was made, and build signals that prove you’re a person, not a pipeline. Accounts that state their intent, show behind-the-scenes work, and welcome a little imperfection may get more reach as Instagram tunes ranking to elevate human texture over algorithmic sheen.
How rivals are responding to the rise of AI content
TikTok’s “see less AI-generated content” control is a striking experiment in giving users power over their own feeds. YouTube has rolled out labels for realistic altered videos, new disclosure requirements for creators, and penalties for failing to mark synthetic media. These moves hint at a consensus taking shape across platforms: transparency alone won’t stop the flood, but as provenance tech and ranking reforms mature, it’s a baseline for trust.
What to watch next as Instagram tackles AI challenges
Mosseri declined to detail specific actions or timelines. The near-term markers will be easy to spot: whether Instagram starts to read and display C2PA-style credentials, rolls out originality scoring at scale, and ships visible credibility indicators users can adjust. The longer-term question is whether provenance and ranking changes can tilt engagement toward human creators without undermining the spontaneous appeal that made Instagram popular in the first place.
For the moment, the company’s bet is both philosophical and pragmatic: when AI is busy perfecting everything, some of the most valuable content may be what you can only make once, in real life, with real people watching.
