Warner Bros. Discovery has filed a federal lawsuit accusing AI image generator Midjourney of “brazen” copyright infringement, alleging the service copied from and commercialized the studio’s most valuable characters while misleading users about what is lawful. The complaint seeks statutory damages of up to $150,000 per infringed work and aims to force changes in how the company trains and markets its system.
What Warner Bros. Discovery alleges
According to the filing, Midjourney’s product relies on unlicensed copies of Warner Bros. Discovery’s catalog to generate images that evoke or reproduce iconic properties. The complaint cites examples spanning Bugs Bunny, Superman, Batman, the Flash, Wonder Woman, Scooby-Doo, and the Powerpuff Girls—characters that drive film, TV, consumer products, and theme-park revenue across the studio’s portfolio.

The studio argues the result is consumer confusion and erosion of its rights: users can prompt the service to create images that appear tied to Warner Bros. Discovery IP without authorization or payment. In statements reported by industry trades, the company framed the dispute as a defense of its creative partners and investments, portraying Midjourney’s business as built on unlawfully copied works rather than licensed data.
The complaint also contends Midjourney encourages a permissive culture around infringement by implying that large-scale copying for training and the images produced by its service are lawful, despite the absence of licenses. That framing, the studio says, harms legitimate markets for derivative artworks, licensed merchandise, and promotional assets.
How Midjourney could defend itself
Midjourney is expected to argue that training on publicly available images is fair use, a position many AI developers have advanced as these cases move through U.S. courts. The company’s system is accessible via subscription, largely through Discord, and transforms text prompts into images using diffusion models trained on vast datasets.
Company representatives have previously acknowledged using large web-scale image corpora compiled by third parties, including datasets like LAION referenced in technology reporting. Supporters of this approach say models learn statistical patterns rather than storing creative works, and outputs are new images guided by user prompts. Rights holders counter that training requires making unlicensed copies and that outputs can be substantially similar to copyrighted characters, logos, and style elements that carry independent legal protections.
Why this case matters for AI and studios
The lawsuit lands amid a wave of entertainment-industry challenges to generative AI. Earlier complaints from Disney and Universal targeted the same developer, calling Midjourney a “bottomless pit of plagiarism” and signaling a coordinated push by major studios to fence off high-value franchises from unlicensed training and output.

Parallel fights in other sectors underscore the stakes. News publishers have sued AI firms over text and image use, and stock platforms such as Shutterstock and Adobe have pivoted to licensing-based AI models, touting compensation mechanisms for contributors. The split between licensing models and scraping-based approaches sets up a crucial policy choice: whether courts or contracts will ultimately define the AI training market.
Legal stakes: damages, discovery, and injunctions
Warner Bros. Discovery’s request for statutory damages up to $150,000 per work puts meaningful financial pressure on Midjourney if the studio can identify numerous infringed titles. Just as important is injunctive relief. An order requiring deletion of training data or changes to how the model is built and marketed would ripple through the AI industry, where developers often rely on similarly sourced datasets.
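To make the scale of that exposure concrete, the arithmetic can be sketched as below. This is purely illustrative: the statutory ranges come from 17 U.S.C. § 504(c) ($750 to $30,000 per infringed work, up to $150,000 where infringement is willful), while the work count of 100 is a hypothetical placeholder, not a figure from the complaint.

```python
# Illustrative sketch of statutory-damage exposure under 17 U.S.C. § 504(c).
# The per-work ranges are statutory; the number of works is hypothetical.

STATUTORY_MIN = 750       # minimum per infringed work
STATUTORY_MAX = 30_000    # ordinary maximum per infringed work
WILLFUL_MAX = 150_000     # maximum per work if willfulness is proven

def exposure_range(num_works: int, willful: bool = False) -> tuple[int, int]:
    """Return (low, high) total statutory-damage exposure in dollars."""
    high = WILLFUL_MAX if willful else STATUTORY_MAX
    return num_works * STATUTORY_MIN, num_works * high

# Example: 100 hypothetical works with willfulness alleged.
low, high = exposure_range(100, willful=True)
print(f"${low:,} to ${high:,}")  # $75,000 to $15,000,000
```

Because each identified work multiplies the ceiling, the studio's incentive in discovery is to enumerate as many registered, infringed titles as possible.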
Discovery could be pivotal. Courts may press for transparency on the precise datasets used, any filtering to exclude copyrighted content, and guardrails to prevent outputs that replicate protected characters. In previous AI cases, judges have scrutinized whether plaintiffs could show substantial similarity between outputs and protected elements, as well as whether the training process itself involved unauthorized copying.
Precedents and policy signals
Early rulings in artist-led cases against AI image firms have been mixed: some claims have been narrowed while others, including allegations related to removal of copyright management information, have survived. Meanwhile, the U.S. Copyright Office has reiterated that works “authored” by AI without sufficient human control are not registrable, complicating attempts to commercialize AI-only outputs that imitate protected characters.
Regulators and courts have not settled the central question: is large-scale, unlicensed training on copyrighted media lawful under fair use? If Warner Bros. Discovery prevails, studios may gain leverage to demand licenses or force architectural changes to models. If Midjourney succeeds, AI developers will see reinforcement for the view that learning from public data is legal, even when outputs closely resemble famous IP.
Either way, the case comes down to a simple question with industry-wide consequences: who controls the value of recognizable characters in the age of generative tools? The rights holder who built the franchise, or the model that can summon its likeness in seconds?