Warner Bros. Discovery has filed a federal lawsuit against AI image generator Midjourney, claiming the company used the studio’s “most valuable characters” and “blatantly” copied and monetized them while also giving users incorrect information about what’s legal. The suit seeks statutory damages of up to $150,000 per infringed work, and it asks the court to compel Midjourney to change how it trains and markets its system.
What Warner Bros. Discovery alleges
According to the complaint, Midjourney’s product is built on unlicensed copies of Warner Bros. Discovery’s library, allowing it to generate images reminiscent of the studio’s iconic properties. The complaint lists as examples characters ranging from Bugs Bunny and Superman to Batman, the Flash, Wonder Woman, Scooby-Doo and the Powerpuff Girls, characters that power film, TV, consumer-products and theme-park revenue across the studio’s portfolio.
The result, the studio argues, is consumer confusion and loss of control: users can ask the service to generate images that could be perceived as associated with Warner Bros. Discovery IP, unauthorized and unpaid for. In statements relayed by industry trades, the company cast the dispute as a stand on behalf of its creative partners and investments, and characterized Midjourney’s business as built on pirated works, not licensed data.
The lawsuit also alleges Midjourney promotes a permissive environment for infringement, telling users in effect that both the large-scale copying involved in training and the images the service produces are legal, despite the absence of any license. The studio argues that this framing harms legitimate markets for derivative artworks, licensed merchandise and promotional materials.
How Midjourney might defend itself
Midjourney is likely to argue that training on publicly available images is fair use, a position many AI developers have put forward as these cases make their way through U.S. courts. The company’s system is available by subscription, mostly on Discord, and turns text prompts into images using diffusion models trained on extensive and diverse datasets.
The company has previously acknowledged using large web-scale image datasets assembled by third parties, including datasets such as LAION mentioned in technology reporting. Defenders of this approach say the models do not memorize individual images so much as learn statistical patterns, and that their outputs are original images generated from user prompts. Rights holders counter that the training process involves creating unlicensed copies, and that outputs can closely resemble copyrighted characters, logos and other elements that enjoy independent legal protection.
Why this case is important for AI and studios
The lawsuit arrives as the entertainment industry confronts the challenges posed by generative AI. It follows earlier complaints from Disney and Universal against the same developer, which described Midjourney as a “bottomless pit of plagiarism” and signal a concerted effort by major studios to fence off high-value franchises from unlicensed training and output.
Parallel battles in other industries highlight the stakes. News publishers have sued AI companies over the use of text and images, while stock platforms such as Shutterstock and Adobe have turned to revenue-sharing licensing deals for AI models, promising payouts to contributors. The split between licensing and scraping-based approaches frames a fundamental policy choice: whether the market for AI training data will be defined by the courts or by contract.
Legal stakes: damages, discovery & injunctions
Warner Bros. Discovery’s claim for up to the statutory maximum of $150,000 per infringed work poses a significant financial threat to Midjourney if the studio can catalog extensive instances of infringement. Of equal importance is injunctive relief: an order to delete training data, or to alter how the model is trained and marketed, would send shock waves through the AI industry, which often depends on similarly sourced datasets.
Discovery could be pivotal. Courts may insist on transparency about the exact datasets used, filters to screen out copyrighted works, and guardrails to stop outputs that replicate protected characters. In earlier AI cases, judges have weighed whether plaintiffs could establish substantial similarity between outputs and protected elements, and whether unauthorized copying occurred in the course of training.
Precedents and policy signals
Early decisions in artist-led lawsuits against AI image companies have been mixed: some claims have been curtailed while others, including claims over the removal of copyright management information, have survived. Meanwhile, the U.S. Copyright Office has reiterated that works “authored” by AI without sufficient human control cannot be registered, complicating any strategy built on commercializing AI-only outputs that traverse protected character territory.
Regulators and courts have yet to resolve the core question: is mass, unlicensed training on copyrighted media lawful as fair use? If Warner Bros. Discovery prevails, studio lawyers gain leverage to demand licenses or changes to models. If Midjourney prevails, AI developers gain affirmation that learning from public data is legal, even when outputs gravitate toward famous IP.
Either way, the case ultimately comes down to a straightforward question with implications for an entire industry: who controls the value of recognizable characters in the age of generative tools, the rights holder that built the franchise, or the model that can summon its likeness in seconds?