Disney is demanding that Google stop using copyrighted works from some of its most popular shows and films, reportedly including titles on Disney+, as raw material for AI research at the search giant. In a letter viewed by industry press, Disney alleges that Google copied its content without consent and has profited from AI outputs that reproduce protected characters and styles.
Disney Accuses Google of Stealing Its Catalog
The letter states that Google’s Gemini tools can generate imagery featuring characters from Star Wars, The Simpsons, The Avengers, Spider-Man, Frozen, The Lion King, Moana, Deadpool, Toy Story, Brave, Ratatouille, and Inside Out, which Disney says points to the use of its copyrighted catalog for training. Disney also contends that Gemini’s watermarking gives consumers the impression of an endorsement, blurring the line between homage, pastiche, and outright duplication.
Disney adds that it repeatedly raised concerns and urged Google to put safeguards in place, but that Google has failed to take sufficient steps to curb ongoing abuse. The company frames the issue as partly a training-data problem and partly a distribution problem: even if the copying happened upstream, the downstream product allegedly replicates and monetizes protected expression.
Google Cites Public Web Protocols and Longstanding Ties
Google, which has advocated for the use of public web data in AI research, said it has a long relationship with Disney and plans to continue working together. The company has promoted watermarking tools such as SynthID, and it offers some publisher-level controls over how data can be used, but it has not publicly acknowledged that systems like Gemini were trained on Disney-owned works specifically. Like other AI vendors, Google has not provided a full list of training sources for its image and multimodal models.
This battle is part of a wider debate between AI platforms and rightsholders over when data counts as “publicly available,” and whether scraping, ingesting, or transforming copyrighted material to train models falls under fair use. The law is anything but settled, and companies are operating under widely different interpretations of what constitutes consent and compensation.
The Stakes for Fair Use and Training Data
Courts are now adjudicating a number of lawsuits over AI training on copyrighted works, and media companies, visual artists, and record labels are all eager to draw the line between transformative use and substitution. The US Copyright Office has acknowledged the uncertainty and is examining how current law applies both to model training and to the content models generate, while also calling for transparency about datasets and better provenance information for generated outputs.
In Europe, the AI Act will require general-purpose model providers to publish a sufficiently detailed summary of the content used for training, pushing the industry toward disclosure and licensing at scale. For a company such as Disney, which monetizes iconic IP across films, streaming services, merchandise, and parks, the concern is not just economic erosion but “dilution” of its brands when AI tools can instantly replicate recognizable characters and styles.
Licensing Deals Offer an Alternative Path
Disney has also pursued a licensing approach, striking a multiyear deal that lets authorized AI platforms legally incorporate a slate of Disney characters. The message is that the door can open, but on negotiated terms. Hollywood’s actors’ union, SAG-AFTRA, has endorsed Disney’s letter to Google and stressed the importance of stopping the unauthorized use of performers’ images, likenesses, and performances in generative systems.
Similar frictions have erupted across the industry, as news publishers, stock image libraries, and music rights holders have lobbied for consent, credit, and compensation. Some have deployed technical blocks or opt-out signals, although researchers argue such signals are often ignored or too narrow in scope to be effective. The result is a patchwork of compliance practices that is generating growing pressure for common rules.
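The most common form these opt-out signals take is a robots.txt directive. Google, for example, documents a Google-Extended user-agent token that publishers can disallow to keep crawled content out of Gemini training without affecting ordinary Search indexing. A minimal illustrative fragment:

```text
# robots.txt — opt this site's content out of use for Gemini
# model training while still permitting normal Search crawling.
User-agent: Google-Extended
Disallow: /

User-agent: Googlebot
Allow: /
```

Because these are voluntary signals rather than enforcement mechanisms, rightsholders argue they do nothing about content that was already scraped, which is part of why the pressure for binding rules persists.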
What to Watch as the Dispute Between Disney and Google Heats Up
The key questions now center on provenance and prevention: can Google technically stop Gemini from regenerating characters that are transparently derived from Disney IP, and will it deliver the dataset transparency and licensing being demanded? If not, Disney’s claims could progress from threats to lawsuits, adding to a docket of cases that will help define the limits of fair use in AI.
The wider industry trend is toward hybrid training models: a blend of public-domain content, synthetic data, and licensed material, usually backed by watermarking, C2PA-style content credentials, and more stringent output filters. Whether Google and Disney can reach terms that accommodate both innovation and control will be a litmus test for how entertainment IP and generative AI can coexist.
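To make the idea of an output filter concrete, here is a minimal sketch of the pre-generation check such a guardrail performs. The blocklist, function names, and refusal message are illustrative assumptions for this article, not any vendor’s actual implementation; production systems use trained classifiers rather than string matching.

```python
# Illustrative sketch of a prompt-level IP filter: refuse requests
# that reference a blocklist of protected character names before
# any image generation happens. Names and messages are hypothetical.

BLOCKLIST = {"spider-man", "elsa", "moana", "deadpool"}

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt mentions any blocklisted name."""
    text = prompt.lower()
    return any(name in text for name in BLOCKLIST)

def generate_image(prompt: str) -> str:
    """Refuse blocklisted prompts; otherwise hand off to the model."""
    if is_blocked(prompt):
        return "Request declined: prompt references protected IP."
    return f"[generated image for: {prompt}]"
```

A real deployment would pair a filter like this with embedding-based similarity checks on the generated output itself, since users can describe a character without naming it, but the control point, refusing before or after generation rather than retraining the model, is the same.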