Disney has taken Google to task over claims that the tech giant’s AI products are assisting in widespread creation and reproduction of copyrighted characters and content, a letter obtained by Variety showed.
The filing characterizes Google’s generative AI services as conduits for bootleg images and videos riffing on famous Disney characters, some bearing Gemini branding that could suggest sponsorship.
The move highlights an intensifying battle over how AI systems use creative IP, and it comes on the heels of news that Disney is entering a large, multi-year licensing deal with OpenAI to place its characters in Sora, signaling a split strategy: favorable treatment for licensed partners and scorched-earth enforcement everywhere else.
What Disney Alleges About Google’s AI and Copyrights
In the letter, Disney argues that Google’s various artificial intelligence tools can be used to produce images and videos that “reasonably approximate certain seen or heard contexts” of specific aspects of its catalog, such as hundreds of animated characters and myriad visual styles, which can then be delivered at scale to consumers. In pointing to Gemini-branded outputs, Disney is essentially alleging a second cause of action beyond copyright: that the use of Google marks on synthetic media creates false association or endorsement, potentially supporting claims under the Lanham Act.
The core of the complaint is not merely that users can prompt their way to infringing outputs, but that the design, training, and distribution pipelines together operate as a system that enables the copying and proliferation of copyrighted works. That framing seeks to widen the argument from a narrow debate about “user misuse” to one of platform-level liability.
The Legal Fault Lines for AI Platforms and Training
The central fault line is the separation between training and output. AI companies tend to argue that ingesting copyrighted data to train models constitutes fair use, and that any infringing results reflect user prompts, not platform design. Rights holders respond that mass ingestion without licenses is an unauthorized reproduction, and that models can memorize and regurgitate material, especially famous, heavily repeated works.
American courts have not resolved the question. Even high-profile cases, like those brought by news publishers and visual artists against AI developers, remain in preliminary phases. The U.S. Copyright Office has warned about unlicensed training and output substitution in policy guidance and a study that drew more than 10,000 public comments. Meanwhile, EU rules nudge toward transparency around training data and provenance, pressuring model makers to document their sources and watermark their outputs.
The DMCA was built for hosting and linking, not generative systems. If a platform is curating and creating content algorithmically, “notice-and-takedown” maps poorly; rights holders increasingly want model-level guardrails that prevent certain characters, franchises, or styles from being output in the first place.
Google’s Probable Defenses And Some Technical Fine Print
Google has emphasized its responsible AI policies: watermarking technologies like SynthID, safety filters that reduce harmful or unsafe content in outputs, and publisher tools that let sites opt out of having their data used for model training. The company generally maintains that generative results are new transformations based on user input rather than identical copies of existing works, and that it complies with valid takedown notices.
Disney’s emphasis on Gemini-branded imagery, however, puts provenance and branding front and center. Watermarks and logos are supposed to foster transparency; in this fight, Disney contends they can also cause consumer confusion when coupled with output that looks too much like trademarked characters. That argument could expand the case beyond copyright to trademark and false endorsement claims, where the facts matter a great deal: how similar two works are and what consumers are likely to think.
What’s At Stake For Industry And Recent Precedents
If a court or regulator adopts Disney’s framing, platforms could face greater pressure to build proactive content filters that block outputs depicting protected franchises. Similar pressure is mounting across media: music labels are hitting AI music generators with lawsuits over unlicensed training and soundalike tracks, and news organizations accuse AI systems of replicating their articles.
The timing also underscores Disney’s dual-track strategy. News of a three-year, $1 billion deal with OpenAI to populate Sora with characters suggests big IP holders are willing to talk AI partnerships — provided they come insulated by licenses, revenue share, and guardrails. That could give platforms an incentive to go for portfolio-wide settlements instead of exposing themselves to piecemeal litigation.
What to Watch Next in the Disney versus Google Dispute
The key questions now:
- Will Disney move for an injunction if Google won’t alter model behavior?
- Can Google show that its safeguards effectively prevent outputs depicting protected characters?
- Might this fight push the industry faster toward standards around rights databases, content provenance, and IP-specific output filters?
For creators and developers, the outcome could rewrite what “responsible AI” means in practice. Platforms may also need:
- Verifiable blocks on famous IP
- Audit trails for training data
- Enterprise licensing for high-value catalogs
- Watermarking and disclosure
In any event, Disney’s letter suggests that the era of casual, unlicensed generation of marquee characters is drawing to a close, and that the next phase of AI growth will be negotiated as much in boardrooms and courtrooms as in laboratories.