It is the latest salvo in a simmering fight over how generative artificial intelligence learns from copyrighted content. The appeal came through the Content Overseas Distribution Association, a Tokyo-based trade group whose members include anime studios, game publishers and major media conglomerates.
CODA said its member companies observed Sora 2 producing outputs that closely resembled Japanese animation and game imagery, prompting them to suspect their catalogs had been used in training without consent. The group contends that, under Japan's copyright regime, prior consent is in principle required, and that there is no statutory opt-out that applies when particular works are copied or reproduced in outputs.
Why Anime Stakeholders Are Rebelling Against AI Models
Japan's animation and game IP is a major force in the global entertainment economy. The Association of Japanese Animations has said the anime market is now worth more than 2.7 trillion yen, with sales lifted by exports, merchandising and streaming. Blockbuster films like Demon Slayer: Mugen Train, which grossed the equivalent of hundreds of millions of dollars worldwide, show how a single hit can generate licensing value far beyond what's on screen.
That value is anchored in distinctive visual styles, worldbuilding and rigorous craft. For studios, the risk comes from AI models that can reproduce content "in the style of" a signature brand reliably enough to undermine control over derivative uses, depress licensing fees and confuse fans about what's real. For game publishers, the same risk extends to concept art, cinematics and promotional assets, which can spread across social platforms within hours.
At the intersection of those concerns sits Square Enix, custodian of Final Fantasy and Kingdom Hearts. Its characters and worlds are immediately identifiable, and their appearance is tightly controlled across games, trailers and collectibles. Whether a popular model could flood feeds with near-lookalikes is more than dry policy fodder; it's an existential challenge to the way Japanese IP holders monetize and safeguard their franchises.
The Legal Fault Lines Over AI Training in Japan
CODA's case mirrors a broader policy debate in Japan and elsewhere about whether training on copyrighted works without permission is legal. Japan adopted a broad text-and-data-mining exception years ago to promote research and innovation, but rights holders have asked the government to clarify how that provision applies to generative models whose outputs resemble particular works. Cultural policy councils have been weighing reforms as creative industries push for stronger guardrails.
Globally, the battlefield is crowded. News organizations have sued AI developers over training data and outputs. Authors, artists and stock image libraries have filed suits testing whether massive ingestion of their work violates copyright or other rights. Early rulings have been mixed, and many cases turn on disclosures about training data and on the technical question of how closely outputs map back to protected expression.
OpenAI has said broadly that its models are trained on a mix of publicly available and licensed data, and it has signed content deals with media and stock providers. It has not, however, fully disclosed Sora's training corpus. That gap is now squarely in CODA's crosshairs: the organization demands explicit consent, clearer transparency and more responsive processes when members flag infringing outputs.
The Ghibli Position and the Culture Clash
Few studios have come to represent handcrafted animation more than Studio Ghibli. Co-founder Hayao Miyazaki has long been vocally opposed to generative processes that bypass the human hand, a sentiment many across Twitter and beyond echoed when Ghibli-style AI clips went viral earlier this year. To Ghibli and its peers, style mimicry isn't harmless homage; it's a shortcut into decades of hard-earned aesthetic identity.
There is an irony in the fact that OpenAI's model shares its name with Sora, one of Square Enix's most famous characters. But the underlying feud isn't a joke: when fan-made or AI-assisted videos take on the cadence of Ghibli or the polish of a Square Enix cinematic, it becomes harder to distinguish at scale between what counts as inspiration and what infringes.
What OpenAI Might Have to Do Next to Address CODA
Operationally, meeting CODA's demands could mean culling training datasets to exclude Japanese catalogs, rolling out filters and provenance systems that block protected footage from being uploaded for fine-tuning, and building clearer opt-in workflows for rights holders who do want to license their material. Technical measures such as content credentials and watermark detection can help, but they depend on platforms cooperating with one another, which they often don't, and on creators adopting the systems.
There's also a strategic question. If OpenAI walls off major Japanese libraries, Sora 2 loses access to some of the most influential visual languages in animation and games. If it does not, the company faces lawsuits, regulatory attention and backlash in one of the world's most culturally important media markets. Either path will shape how AI video products are built, licensed and brought to market.
The Stakes for Games and Global IP in the AI Era
Game publishers like Square Enix depend on pipelines of art, motion capture and cinematics that span several continents. Generative video can have creative applications inside studios, but unconstrained external models that learn from and mimic their assets increase exposure around modding, user-generated content and marketing. Expect publishers to push toward enterprise AI products trained on licensed or first-party data, and to demand that consumer AI platforms either sign revenue-sharing agreements or carve their franchises out.
For now, CODA’s move crystallizes a larger trend: rights holders are no longer waiting for global consensus around AI training. They are asserting themselves, brand by brand and market by market, and insisting that model builders meet them on licensing, transparency and enforcement. How OpenAI engages with Japan’s creative heavyweights will be a sign of how the next round of AI–entertainment negotiations plays out elsewhere in the world.