If your YouTube Shorts feed seems overrun with low-effort, AI-spun clips, you’re not just imagining it. A new analysis from the video-editing platform Kapwing, first reported by the Guardian, found that even brand-new users are immediately exposed to synthetic content, with AI-generated “slop” surfacing at striking rates in a completely fresh feed.
What the researchers found in Kapwing’s YouTube Shorts study
Kapwing analyzed the first 500 Shorts served to a brand-new account with no viewing history. Of these, 104, or 21 percent, were found to be AI-generated. Another 165 videos, or 33 percent, fell into a category the researchers called “brainrot”: low-value content engineered to loop endlessly for a quick hit of attention rather than any substance.

In other words, a new user’s feed was more than half weighted toward synthetic or low-effort content. The line between AI slop and “brainrot” can be fuzzy, but the researchers described the latter as content that recycles stock footage, features robotic text-to-speech narration, or leans heavily on repetitive hooks and context-free trivia.
How the test was run to measure AI content on Shorts
The team created a fresh account to minimize personalization and recommendation bleed-through, then captured the first 500 items served by the Shorts carousel. AI-generated clips were identified by telltale signals: smooth synthetic voiceovers, obvious lip-sync animation, visual-generation artifacts, factory-line photo slideshows, and language that closely follows the patterns of bot-written scripts.
As with any audit of algorithmic feeds, there are caveats. Classification is inherently fuzzy, and a 500-video snapshot is an opening salvo rather than a census. Still, the scale of the finding (21 percent AI-made, 33 percent low-value loops) accords with what many users report anecdotally and gives other researchers a concrete baseline to reproduce.
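The headline ratios are simple arithmetic over the sample. A minimal sketch below (using synthetic labels that mirror the reported counts, not Kapwing’s actual data or tooling) shows how such a tally works:

```python
from collections import Counter

def feed_breakdown(labels):
    """Return each category's share of a feed sample as a whole-number percentage."""
    counts = Counter(labels)
    total = len(labels)
    return {cat: round(100 * n / total) for cat, n in counts.items()}

# Synthetic sample mirroring the reported split: 104 AI-generated,
# 165 "brainrot", and 231 other clips out of 500.
sample = ["ai"] * 104 + ["brainrot"] * 165 + ["other"] * 231
shares = feed_breakdown(sample)
print(shares)  # {'ai': 21, 'brainrot': 33, 'other': 46}
```

Any replication of the audit reduces to exactly this kind of count once each video has been labeled; the hard part is the labeling itself.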
Why AI slop is easy to spread across YouTube Shorts
Shorts rewards speed and volume, with incentives built around short-term retention. Generative tools shrink production cycles to minutes, letting content farms spin up hundreds of near-identical clips with automation scripts, stock visuals, and voice bots. When quality is measured chiefly by whether someone sticks around for 8 to 12 seconds, quality stops being much of a requirement at all.
There’s also a monetization angle. YouTube’s revenue share on Shorts, and the wider creator economy more generally, encourages creators to produce constantly. Cheap AI production turns content into arbitrage, and recommendation systems optimized for engagement can amplify anything that captures attention, whether a thoughtful explainer or a stitched-together slide reel narrated by a synthetic voice.
This isn’t unique to YouTube. TikTok and Reels encounter the same dynamics. Previous research from groups like the Mozilla Foundation has found that recommendation engines can become rapid-fire, low-quality echo chambers when not actively regulated, especially around trending topics that are outlandish or stomach-churning. Generative AI just speeds up the supply side.

Where this content is most entrenched across countries
Kapwing’s country-level snapshot pointed to clear regional patterns. Spain had 20.22 million total subscribers to AI-slop channels, more than any other market examined, despite placing fewer of these channels in its top 100 than some other markets. The United States had nine AI-slop channels in its top 50, and the third-highest subscriber total at 14.47 million.
The variability likely reflects language markets, template sharing between networks, and cross-posting strategies. The study suggests that in certain countries a handful of dominant players have scaled aggressively, with more channels reaching large followings.
What platforms and viewers can learn and do about AI slop
YouTube has added labels for synthetic or manipulated content and requires creators to disclose when their videos contain what it calls realistic AI-generated media, alongside policies against deepfakes and misleading manipulation. Enforcement, though, is difficult, particularly on Shorts, where millions of clips cycle through daily and borderline content (low-value slop posts, for instance) may not fall cleanly within policy lines.
Technical provenance tools could help. The C2PA coalition’s content credentials, already rolling out across parts of the media ecosystem, attach tamper-evident metadata that can indicate AI involvement, from which tools were used to which steps were taken. If YouTube ingests and surfaces those signals at scale, viewers and moderators gain a better filter for what’s synthetic.
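As an illustration only: C2PA manifests carry “actions” assertions whose digital source type can mark fully generative output. The toy filter below assumes a manifest has already been parsed into a simplified dict; real content credentials are embedded as signed CBOR/JUMBF data and should be read with an official C2PA SDK, not hand-rolled code like this.

```python
# IPTC digital source type that C2PA uses to flag fully AI-generated media.
TRAINED_ALGORITHMIC = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def looks_ai_generated(manifest: dict) -> bool:
    """Scan a pre-parsed, simplified C2PA manifest for a creation action
    whose digitalSourceType marks generative output."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == TRAINED_ALGORITHMIC:
                return True
    return False

# Hypothetical manifest shaped like a C2PA actions assertion.
manifest = {
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {"action": "c2pa.created",
                     "digitalSourceType": TRAINED_ALGORITHMIC}
                ]
            },
        }
    ]
}
print(looks_ai_generated(manifest))  # True
```

The point is less the code than the shape of the signal: provenance metadata turns “is this AI?” from a guessing game over visual tells into a lookup, provided platforms actually read and surface it.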
For now, the available measures are small but meaningful. Viewers can nudge the feed with “Not Interested” and “Don’t Recommend Channel,” and flag deceptive AI content when labels are missing or misleading. Creators who use AI should be transparent about it, both to satisfy platform rules and to build trust, while investing in the editing and storytelling that outshines the slop.
The bigger picture for YouTube Shorts and generative AI
Kapwing’s audit arrives at the point where generative tools and recommendation systems overlap. The result is a feed in which synthetic volume can eclipse human work unless platforms strengthen provenance, reinforce quality signals beyond raw retention, and make labeling unavoidable.
The takeaway seems clear: AI is not merely creeping into Shorts; it is shaping the on-ramp for new users. If a new viewer’s first 500 Shorts look anything like these, the incentives behind them deserve scrutiny now.
