Merriam-Webster has declared "slop" its word of the year, a nod to the flood of low-quality, AI-generated content washing over the internet. The dictionary defines the term as digital content that is cheaply and rapidly produced in mass quantities with artificial intelligence: content that resembles the real thing but can lack originality, accuracy, or thoroughness.
It's a choice that reflects the culture. It recognizes that AI is not merely a breakthrough technology but a force altering what we read, see, and share, one that is stirring up a new vocabulary of skepticism. As Merriam-Webster's leadership told The Associated Press, "The word is extremely powerful ... because it's used in such a specific way." Slop fits the moment in part because the word itself does the work: it's vivid, a bit derisive (people had been using it online all year), and blunt about the problem it names.

Why ‘slop’ rose to the top of Merriam-Webster’s list
"Slop" narrowed this year to mean content that feels algorithmic: tedious product reviews that say nothing new, rewritten news articles patched together by bots, videos narrated in a machine voice, and auto-generated e-books cluttering retailer listings and search results. The word captures online frustration with feeds crowded by barely differentiated near-duplicates, spammy listicles, and low-effort media engineered purely for clicks and ad impressions.
Language watchers note that Merriam-Webster typically weighs lookup statistics alongside editorial judgment. The company does not publish its full methodology, but searches for AI-adjacent terms spiked throughout the year, as did debates over authenticity, amid a burst of ever more capable media models whose questionable outputs often went viral precisely because they were terrible.
From content farms to culture: how AI changed creation
Slop exploded in tandem with the rise of advanced generators, everything from text and image systems to video tools like OpenAI's Sora and Google's Veo, which together have lowered the cost of producing passable content to near zero. The result: AI-authored books clogging Amazon, fake podcasts masquerading as news, jingles and ads cooked up in minutes, even feature-length experiments fabricated from a prompt or two.
Gatekeepers have been playing catch-up. After a tide of AI-written works muddied genres from romance to how-to, Amazon's Kindle Direct Publishing introduced disclosure requirements and daily submission limits. NewsGuard has identified hundreds of AI-generated "news" sites churning out articles to harvest ad revenue. Social platforms and ad networks have updated their labeling and policy language, but enforcement lags the scale of automation.
Education, law, and cybersecurity have not been spared. Teachers report a surge of AI-written essays that slip past basic plagiarism filters. Law firms have been publicly embarrassed by filings containing fabricated citations. Security researchers warn that automated reports and duplicated threat intelligence are crowding their field, undercutting trust and diluting careful analysis. "Slop," in other words, is interdisciplinary.

The slop economy and its incentives across the web
The economics are simple: if programmatic ads pay by volume and SEO rewards frequent updates, there is money in flooding the zone. AI-backed content mills are cheaper than ever and can scale what once required writers, voice actors, and editors. Even if each page draws only a trickle of traffic, millions of pages add up.
Critics also warn of a "two-tier internet": premium, paywalled, human-made reporting and art on one end, and an ocean of indistinguishable low-quality filler on the other.
Consumer trust wanes, and smaller publishers get squeezed as discovery algorithms struggle to surface original work from the noise. Efforts like the Coalition for Content Provenance and Authenticity, which specifies tamper-evident provenance metadata for media, aim to help, but adoption has been patchy.
A global lexicon shift reflected in other dictionaries
Merriam-Webster's choice lands within a larger linguistic moment. Australia's Macquarie Dictionary picked "AI slop," channeling the same worry with a regional accent. Oxford chose "rage bait," a term for engineered outrage as a growth strategy. Collins went with "vibe coding," the practice of prompting AI to generate software from plain-language descriptions. Together, the selections mark a year defined by automation's pull on attention, aesthetics, and authenticity.
What the choice signals for platforms, publishers, and readers
Crowning "slop" word of the year is not an indictment of all AI output; it highlights the failure modes audiences encounter every day. It is a reminder that speed and scale do not guarantee quality, and that provenance, transparency, and editorial standards matter more, not less, in an automated era.
For platforms and publishers alike, the signal is straightforward: invest in trust. Adopt provenance and source-tracking tools, label synthetic media, and reward original reporting and creative work. For readers, the advice is equally simple: favor known sources over unknown ones, be skeptical of one-click summaries, and read closely enough to notice which claims are actually supported. "Slop" caught on as a joke, but it stuck because it names a real cost, and a choice about the internet we want next.