Google has quietly taken down dozens of YouTube videos showing Disney-owned cartoon characters engaging in questionable behavior, apparently in response to a complaint from Disney, according to Variety. Visit one of the affected video pages now and you’ll see a generic notice saying the video was removed because of a copyright claim by Disney.
Why the YouTube takedowns happened after Disney’s request
Disney’s letter cited rampant improper use of its intellectual property throughout various franchises, from Star Wars and Marvel to animated hits like Frozen, Moana and Lilo & Stitch. The notice highlighted clips that seemed to have been generated with Google’s Veo, the company’s video-generation model, as well as a trend of AI-generated “action figure” images showing famous characters like Deadpool, Elsa from the movie “Frozen,” Homer Simpson and Darth Vader.
Whatever role user prompts play in generating such content, Disney’s position is clear: its characters and story worlds are protected works, and unauthorized reproductions or derivative depictions, particularly those distributed at scale on major platforms, may infringe copyright or related rights. The takedowns highlight how quickly large rights holders are moving to police AI content that crosses those lines.
Google’s response and platform policies explained
Google said it is collaborating with Disney to address the claims and pointed to copyright controls it has built across its products over the years: YouTube’s Content ID system, the DMCA notice-and-takedown process, and its “Google-Extended” control, which lets publishers block certain data from being used to train Google’s AI models. In practice, compliant takedown notices result in swift removals, since platforms must act on them to maintain their safe-harbor protections.
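For context, Google-Extended is not a YouTube feature but a crawler control token: site owners who do not want their pages used for Google’s AI models can address it in robots.txt. A minimal sketch of how a publisher might use it (the exact scope of what Google-Extended covers is defined in Google’s crawler documentation, not here):

```text
# robots.txt — opt this site's content out of use by Google's AI models
User-agent: Google-Extended
Disallow: /

# Regular Google Search crawling is governed separately and unaffected
User-agent: Googlebot
Allow: /
```

Note that this controls training inputs going forward; it does nothing about infringing outputs already uploaded to YouTube, which is what Disney’s notice targets.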
The case raises an optics problem peculiar to the AI age: some of the flagged clips were generated with a Google-built model and hosted on Google’s own video platform, according to two people familiar with what happened. That dynamic will add pressure on platforms to deploy proactive filters for known IP alongside responsive takedowns, especially as generative video quality improves and volume mounts.
The legal background for generative AI and IP
Copyright law has long distinguished between training and output, but courts are only beginning to probe how that line applies to generative systems. Rights holders contend that AI tools can create unauthorized derivative works and compete with licensed merchandise or media, regardless of whether a model was trained on so-called “publicly available” material. Creators and AI companies counter that many uses are transformative or user-driven, raising fair use questions that have yet to be resolved.
Disney has been more aggressive than many other rights holders. The new notice to Google follows Disney’s move earlier this year to join Universal in suing the image generator Midjourney, framing mass scraping and unlicensed outputs as systemic infringement. Publishers and authors have filed separate actions against several AI companies, suggesting wider legal risk for platforms that host or enable unlicensed generative content.
A mixed signal in Disney’s evolving AI push
The takedowns came roughly in concert with Disney’s deepening embrace of AI, including a $1 billion investment in OpenAI, according to the AI company. Under a three-year licensing deal, Disney will have preferential access to OpenAI’s technology, OpenAI plans to let users make Sora videos featuring Disney characters within certain limits, and Disney may even feature select fan-made shorts on its streaming service, Disney+.
The two-track approach is plain: Disney is aggressively shutting down unauthorized uses while also creating a tightly controlled, licensed channel for AI remixes of its IP. Look for tighter enforcement against uploads that fall outside that ecosystem, combined with new official pathways that monetize fan creativity without forfeiting control.
What it means for creators and platforms
For creators, the message is simple: using AI doesn’t automatically entitle you to use protected characters. Uploaders who build content on branded IP face faster removals and potential copyright strikes, regardless of any disclaimer about AI use. For platforms, the issue is operational: building better detection tools, broadening rights databases, and ensuring their own AI generation tools respect licensing obligations so infringing output is blocked before it is published.
YouTube’s existing tools, such as Content ID and AI-disclosure labels, were designed for an earlier era; expect them to be supplemented with model-level guardrails and automated filters built on major IP catalogs. If that doesn’t slow the torrent, more cease-and-desist letters, and potentially new lawsuits, will almost certainly follow.