MrBeast is ringing the warning bell on artificial intelligence, suggesting the technology could kick millions of people who make a living online to the curb. “This is scary times,” the world’s most-watched YouTuber said, a frank assessment that’s felt throughout the creator economy as AI-generated video and audio zip from novelty to mainstream. From an artist whose actions have frequently signaled the coming of industry tides, that warning carries unusual weight.
A Warning From YouTube’s Top Creator on AI’s Impact
Jimmy Donaldson, a.k.a. MrBeast, is No. 1 on Forbes’ newest list of creators with an estimated $85 million in earnings and more than 600 million followers across platforms. When he wonders whether AI might undermine human-generated content, smaller creators — already at the mercy of algorithm tweaks and fickle ad markets — hear both diagnosis and augury.
His position is complicated by his own experimentation with AI. Earlier this year, he drew criticism for releasing, through his analytics platform, a tool that generated AI thumbnails; he quickly pulled it and told fans to hire human artists instead. The course correction highlighted a more fundamental tension: creators want AI's productivity gains without eroding the value of creative labor.
AI Video Tools Are Racing Ahead Across Major Platforms
The latest is OpenAI's Sora 2 and its new mobile app, which lets users spin up AI clips, often of themselves, in a TikTok-style vertical feed. Early traction shot the app to No. 1 on the U.S. App Store, a sign that synthetic video is now competing for the same attention as human-made content.
The big platforms are leaning in, too. YouTube has introduced AI-assisted editing, automatically generated highlights for livestreams and podcasts, and chat prompts in YouTube Studio to aid strategy-building. Instagram, TikTok and Snapchat are testing ways to detect or label AI-created content. The bar to post something eye-catching is falling fast, which means even more content vying for the same minutes of attention.
Not everyone is impressed. A rising cohort of viewers dismisses low-effort, high-volume synthetic clips as “slop” and wants feeds that are not clogged with uncanny or clickbaity videos. But as models improve, the line between AI-made and human-made content will blur, and detection will be uneven, placing a premium on trust and transparency.
The Economic Squeeze on Creators Intensifies With AI
AI intensifies two long-running pressures: supply overload and revenue volatility. If AI tools flood platforms with good-enough content at near-zero marginal cost, ad rates can drift lower as inventory grows. Mid-tier creators, the lifeblood of the creator economy, are most vulnerable to RPM compression and reduced discoverability.
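The supply-side squeeze can be sketched as back-of-the-envelope arithmetic. Assuming, purely for illustration, a fixed pool of advertiser spend spread evenly across all monetizable views, RPM (revenue per 1,000 views) falls as AI-generated inventory grows; none of the figures below are real platform numbers.

```python
def rpm(total_ad_spend, total_views):
    """Effective revenue per 1,000 views when a fixed ad budget
    is spread across all monetizable inventory."""
    return total_ad_spend / total_views * 1000

# Illustrative assumptions, not real platform figures:
# a $10M monthly ad budget spread over 2B views ...
baseline = rpm(10_000_000, 2_000_000_000)   # $5.00 RPM
# ... versus the same budget over 3B views after an AI-driven
# 50% jump in content supply.
flooded = rpm(10_000_000, 3_000_000_000)    # about $3.33 RPM

print(f"baseline RPM: ${baseline:.2f}")
print(f"post-flood RPM: ${flooded:.2f}")
```

The model is deliberately crude (real ad budgets are not fixed, and not all views monetize equally), but it shows why growing inventory against flat demand pushes per-view payouts down.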
There’s also a substitution effect. For some formats, brands and publishers can deploy synthetic presenters, voices or product demos rather than hiring humans. Goldman Sachs has estimated that hundreds of millions of jobs worldwide could be affected by automation more broadly; while creator work is less assembly line than art, entry-level production tasks are squarely in AI’s sights.
The size of the ecosystem sets the stakes. Adobe’s Future of Creativity study counted hundreds of millions of people creating content worldwide, and that population has only swelled since 2020. Even small drops in monetization rates can wipe out the incomes of creators who rely on platform payouts rather than diversified revenue.
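That sensitivity can be made concrete with a toy calculation. All figures here (views, RPM, fixed costs) are illustrative assumptions: because a creator's costs are largely fixed, even a modest dip in RPM can erase most of the margin.

```python
def monthly_ad_income(views, rpm):
    """Ad payout given monthly views and RPM (revenue per 1,000 views)."""
    return views / 1000 * rpm

views = 1_500_000                               # assumed monthly views
income_before = monthly_ad_income(views, 4.00)  # $4.00 RPM -> $6,000
income_after = monthly_ad_income(views, 3.50)   # 12.5% RPM dip -> $5,250

fixed_costs = 4_500                             # assumed editing, gear, etc.
profit_before = income_before - fixed_costs     # $1,500
profit_after = income_after - fixed_costs       # $750

print(f"A 12.5% RPM dip cuts profit by "
      f"{(1 - profit_after / profit_before) * 100:.0f}%")
```

Under these assumed numbers, a 12.5% decline in RPM halves the creator's profit, which is the mechanism behind the "small perturbations" point above.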
Trust, Disclosure, and Rights for Creators in the AI Era
The reputational risk may be as large as the monetary one. If audiences discover that a creator quietly used AI for large swaths of a video (scripts, faces or voices), they can turn on that creator en masse. The Federal Trade Commission has stressed clear disclosures for endorsements and deceptive advertising, and regulators are now taking a closer look at synthetic media.
Platforms are moving, too. YouTube, Meta and TikTok have added labels for AI-generated or altered content and forms for reporting deepfakes. Regulators are also advancing provenance and watermarking rules; the EU’s AI framework calls for disclosure of synthetic content, and media coalitions are lining up behind standards like C2PA to certify original media.
Creative industries have already negotiated guardrails. Writers’ and actors’ strikes in Hollywood produced contract language on consent and compensation for AI training and digital doubles, principles that will almost certainly migrate to brand deals and influencer contracts as likeness-cloning tools ready for prime time.
What Creators Can Do Today to Stay Resilient Amid AI
Lean into what AI has a harder time faking: community, access and lived experience. Formats predicated on personality — live streams, behind-the-scenes footage, interactive challenges, real-world builds — are more difficult to synthesize at scale in believable ways. Transparent disclosure of AI-assisted segments can build trust rather than diminish it.
Diversify revenue beyond platform ads, including with memberships, courses, licensing and owned channels like newsletters or podcasts. Make sure you protect your voice and likeness with watertight contracts and use provenance tools where possible. Use AI as a co-pilot for research, drafts and post-production — not a ghostwriter of your identity.
MrBeast’s message is not anti-technology; it’s a sign that incentives are changing fast.
As AI-generated video clogs feeds, the creators who protect their brands, embrace transparent workflows and build deeper relationships with their audiences will be best placed to weather the storm and turn “scary times” into a strategic advantage.