Hundreds of prominent artists, including Scarlett Johansson, R.E.M., and Cate Blanchett, have thrown their weight behind a new campaign demanding an end to what they call “AI slop” and the uncompensated scraping of creative works. The initiative, titled “Stealing Isn’t Innovation” and organized with the Human Artistry Campaign, frames today’s generative AI boom as built on the mass, unlicensed harvesting of culture, and urges lawmakers and tech companies to enforce consent, credit, and compensation.
Artists Rally Against AI Slop and Unlicensed Data Scraping
The sign-on letter gathers more than 800 actors, musicians, writers, and filmmakers who argue that their voices, images, and catalogs have been copied at industrial scale to train models without permission. Their message is blunt: AI firms should not be allowed to ingest creative work and then flood the market with synthetic lookalikes that undercut livelihoods and mislead audiences.
Johansson’s presence is especially notable after her highly publicized dispute over an AI voice that she said sounded uncomfortably close to her own. R.E.M., long protective of their recordings, and Blanchett, an outspoken advocate for creative rights, add cross-genre clout. The list even includes artists who have previously appeared in tech marketing, underscoring how quickly industry enthusiasm has given way to skepticism.
A Coalition with Legal Teeth Presses for AI Accountability
The Human Artistry Campaign is a broad coalition of creative unions, labels, publishers, and rights groups formed in 2023 to set baseline principles for AI. Its demands track with a wave of litigation now testing whether training on copyrighted works without a license is lawful. Cases filed by novelists through the Authors Guild, music industry suits from the Recording Industry Association of America against AI music startups, and a landmark complaint from The New York Times challenge tech companies’ reliance on fair use as a blanket defense.
AI companies counter that training on publicly available data is transformative and socially beneficial. But courts have yet to deliver definitive rules for generative systems, and regulators are circling. The U.S. Copyright Office has made clear that purely AI-generated output is not copyrightable and is studying training-data questions, while the European Union’s AI Act introduces new transparency obligations for model developers.
The Economic and Cultural Stakes of Unchecked Generative AI
For working artists, the concern isn’t abstract. A convincing synthetic voice can undercut paid narration or voiceover gigs. AI music that mimics a band’s signature sound can siphon streams and licensing opportunities. In the newsroom, AI rewrites can cannibalize traffic without supporting the reporting they depend on. When synthetic material floods platforms, the signal-to-noise ratio drops, making it harder for original work to find an audience.
Audiences are wary, too. In global research published by the International Federation of the Phonographic Industry, a large majority of listeners said AI systems should not use artists’ music without permission. High-profile deepfakes — from celebrity voice clones to political robocalls imitating public figures — have reinforced fears that a frictionless pipeline from training data to convincing forgeries erodes trust.
There’s also a technical risk: academic studies have documented “model collapse,” where systems trained repeatedly on synthetic data degrade in quality, echoing and amplifying prior errors. Artists warn that an internet saturated with AI derivatives increases that risk, as future models inadvertently ingest their own output.
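That compounding dynamic is easy to demonstrate in miniature. The sketch below is an illustrative toy, not drawn from the cited studies: each “generation” is a Gaussian fitted only to samples from the previous generation’s output, so estimation error accumulates and the learned distribution drifts and loses its tails.

```python
import numpy as np

# Toy sketch of "model collapse": each generation is "trained" (a Gaussian fit)
# only on synthetic samples from the previous generation. With a finite sample,
# estimation error compounds, so the fitted distribution drifts and narrows.
rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=100)  # generation 0: "real" data

for gen in range(201):
    mu, sigma = data.mean(), data.std()           # fit the current "model"
    if gen % 25 == 0:
        print(f"gen {gen:3d}: mu={mu:+.3f}  sigma={sigma:.3f}")
    data = rng.normal(mu, sigma, size=100)        # next generation sees only model output
```

Run long enough, sigma shrinks toward zero: each generation reproduces an ever-narrower slice of the original distribution, a stripped-down version of the degradation the studies describe in large generative models.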

Licensing for AI Is Emerging but Remains Patchy and Uneven
Some parts of the market are moving toward deals. News organizations such as News Corp and stock-media platforms such as Shutterstock have struck licenses with AI developers to provide vetted content and metadata. These agreements hint at a workable path: negotiated access to archives, guardrails on usage, attribution, and revenue-sharing with rights holders.
But artists argue that piecemeal contracts won’t fix a system that treats the open web as a free buffet. They want default rules that put consent first, not last, and that make provenance and watermarking standard so consumers know when something is synthetic and whose work trained it.
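What “provenance by default” could look like is easiest to see in a sketch. The snippet below is a hypothetical illustration, not the API of any existing standard (real efforts such as C2PA define much richer, certificate-signed manifests): it binds a minimal JSON label, including an ai_generated flag, to a file via its hash and a signature.

```python
import hashlib, hmac, json

SIGNING_KEY = b"demo-secret"  # hypothetical; real schemes use certificates, not a shared secret

def make_manifest(content: bytes, generator: str, ai_generated: bool) -> dict:
    """Build a minimal provenance label bound to this exact content by its hash."""
    body = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": ai_generated,
    }
    payload = json.dumps(body, sort_keys=True).encode()
    return {"body": body, "signature": hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the signature, then check the manifest really describes this content."""
    payload = json.dumps(manifest["body"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and manifest["body"]["sha256"] == hashlib.sha256(content).hexdigest())

m = make_manifest(b"synthetic audio bytes", generator="example-model-v1", ai_generated=True)
assert verify_manifest(b"synthetic audio bytes", m)   # the label travels with the file
assert not verify_manifest(b"edited bytes", m)        # any alteration breaks the binding
```

The design point is the binding: a label cryptographically tied to the exact bytes can survive redistribution, whereas a disclosure stored in a separate database cannot.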
What the Campaign Seeks Now from Lawmakers and Platforms
The signatories call for three clear commitments:
- No training on creative works without permission
- Clear labeling of AI-generated content
- Fair compensation when rights holders opt in
They also urge strong penalties for deepfake abuses and impersonation, reflecting lessons from recent political and entertainment hoaxes.
Their ask to policymakers is equally direct:
- Harmonize rules across jurisdictions
- Close loopholes that allow data laundering via third parties
- Resource enforcement so violations carry real consequences
For platforms, the message is to build provenance tech into upload pipelines and prioritize detection, not merely takedowns after the damage is done.
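Continuing the hypothetical sketch above, a platform-side gate might look like this: verify any attached manifest before publishing, label verified synthetic content, and route unlabeled uploads to detection instead of publishing them silently. The detector here is a stub, and verify_manifest is the helper from the earlier snippet.

```python
from typing import Optional

def ai_likelihood(content: bytes) -> float:
    """Placeholder detector; a real platform would run trained classifiers or watermark checks."""
    return 0.0

def handle_upload(content: bytes, manifest: Optional[dict]) -> str:
    """Hypothetical upload gate: trust verified provenance first, detect as a fallback."""
    if manifest is not None:
        if not verify_manifest(content, manifest):   # helper defined in the earlier sketch
            return "reject: manifest does not match content"
        label = "AI-generated" if manifest["body"]["ai_generated"] else "human-made"
        return f"publish: labeled {label}"
    # No manifest attached: detect rather than publish unlabeled material by default.
    if ai_likelihood(content) > 0.8:
        return "hold for review: likely synthetic, no provenance"
    return "publish: unlabeled, low AI likelihood"
```

This is the shift the campaign is asking for: provenance checked at ingestion, with detection as a backstop, rather than takedowns after synthetic material has already spread.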
Why This Moment Matters for Consent, Credit, and Compensation
Generative AI is not going away; the question is whether it evolves as a partner to human creativity or a parasite on it. The Stealing Isn’t Innovation campaign plants a flag for the former, backed by a cohort large enough to command attention. With courts weighing precedent and regulators drafting rules, the artists’ coalition is betting that now is the window to lock in consent, credit, and compensation — before AI slop becomes the default soundtrack of the internet.
