More than a fifth of what new users see on YouTube Shorts is now artificially generated filler, according to a new analysis from video editing platform Kapwing. The report estimates that 21 percent of videos served up to a fresh account are “AI slop,” a term for low-effort AI-generated clips built to snag fast views rather than deliver real substance.
How the study measured ‘AI slop’ in YouTube Shorts
For their experiment, researchers at Kapwing created a fresh, untrained YouTube account to see what the Shorts algorithm would recommend before any personalization kicked in. Analyzing the first 500 recommended videos, they found that after an initial run of 16 human-made clips, 104 of the next 484 were AI-generated, which works out to roughly the 21 percent headline figure. The trademark elements: synthetic narration or face avatars, photorealistic imagery or cheesy stock footage stitched into sensational premises, and soul-sucking formulas designed for maximum watch time.
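The headline figure is simple arithmetic over those counts. A minimal sketch (the variable names are ours, and the step of labeling clips as AI-generated was the study's manual judgment, not something computed):

```python
# Hypothetical reconstruction of the share calculation behind the 21% figure.
# Counts come from Kapwing's reported sample; the AI/human labeling itself
# was done by the researchers, not by this code.
total_sampled = 500        # first 500 recommended Shorts on a fresh account
initial_human_run = 16     # opening run of human-made clips
labeled_ai = 104           # clips flagged as AI-generated in the remainder

remainder = total_sampled - initial_human_run   # 484 clips after the opening run
share = labeled_ai / remainder

print(f"AI-slop share of remainder: {share:.1%}")  # → 21.5%
```

Rounded down, that is the "21 percent" in the headline; the exact quotient is about 21.5 percent of the post-warm-up sample.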

That approach mirrors how platform researchers typically test algorithmic “cold starts.” It doesn’t purport to represent the entirety of YouTube, but it does show how dense automation is in a default Shorts feed, an important finding given Shorts’ role in audience growth and creator discovery.
Where AI slop thrives across YouTube Shorts markets
Kapwing’s view analysis identifies South Korea as the biggest consumer of AI slop by far, with top channels totaling an estimated 8.25 billion views. One standout is the channel Three Minutes Wisdom, known for its photorealistic “wild animals versus household pets” vignettes, which have garnered roughly 2.02 billion views.
Pakistan was second, with top channels amassing around 5.34 billion views. The United States was third with roughly 3.39 billion views, thanks in part to the Spanish-language channel Cuentos Fascinantes, which alone attracted 1.28 billion views and an estimated $2.66 million in creator earnings. The lesson here isn’t geography but incentives: where short-form habits are entrenched, AI-native content farms can scale fast.
Why AI slop is flooding YouTube Shorts
Costs have collapsed. With products like OpenAI’s Sora and Google’s Veo turning a prompt into a finished clip, the economics now favor volume. Creators, and more recently small studios, can crank out dozens of near-identical Shorts in hours, see which hooks stick, and then replicate. Throw in voice clones, stock music, and multilingual captions, and each clip can be repurposed for dozens of markets without much extra work.
The algorithm’s incentives matter just as much. Short-form feeds reward retention and rapid virality, not laborious originality. That favors “brainrot” formats, such as hypnotic loops, uncanny pseudo-nature footage, and cliffhanger narratives, regardless of whether the underlying story is lightweight or fictitious. The result is an attention economy in which sheer synthetic volume can beat human craft.

YouTube and competitors hurry to label AI content
YouTube has announced it will label AI-generated content and requires creators to disclose synthetic media in some cases, particularly when realistic people or events are depicted. Enforcement and detection remain difficult at Shorts scale, though. Highly stylized edits and audio-only synthesis are hard for automated classifiers to catch, leaving much of the burden on creator disclosure (and user reports).
Rivals are also on the move. TikTok has introduced tools intended to help users identify and filter AI-generated media in their feeds. Across the industry, the direction is clear: more scrutiny, applied unevenly. Without reliable, platform-wide signals, audiences are left to sort what’s real from what’s merely plausible.
What the 21% signal means for new YouTube Shorts users
The headline number won’t shock seasoned creators; many have watched Shorts lurch toward templated, low-cost content for months. But quantifying it matters. In an untrained feed, 21 percent is a stake in the ground: AI-native formats aren’t fringe, they’re part of the default experience for new or logged-out viewers, shaping taste before human creators get a fair hearing.
For YouTube, the risk is an erosion of trust and differentiation. If users increasingly open the app to find recycled synthetic clips, session time may rise in the short term, but satisfaction may fall. For creators, the message cuts both ways: AI can speed up production, but chasing slop invites burnout cycles, as today’s viral format becomes tomorrow’s scroll-past.
Kapwing’s report doesn’t settle the debate; it deepens it. As generative tools grow more powerful, the question isn’t so much whether AI has a place on YouTube as whether platforms can identify and surface work that informs or entertains in deeper, richer ways than cargo-cult mimicry. For now, that default feed suggests the battle is far from over.
