After weeks of viral mashups (you must’ve seen cutesy pocket monsters zipping into somber biopics and old-school superheroes crashing the arthouse scene), OpenAI says it plans to give rightsholders more say in how their characters appear in Sora videos. It has also raised the possibility of sharing revenue with copyright holders, a sign that it is thinking about turning memetic chaos into a regulated, monetizable ecosystem.
What OpenAI Is Promising for Sora IP and Likeness Controls
OpenAI will implement “greater restrictions” on character use, building on its opt-in likeness policy and layering on more control, CEO Sam Altman said. In practice, that means rightsholders could set rules such as “allow” or “block entirely,” possibly with conditions attached, down to the individual character, context, or country. Altman also cautioned that the approach will evolve rapidly as OpenAI experiments with what actually works at scale.
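Nothing official has been published, but a rights policy along those lines might look something like this sketch. Every field name, the `CharacterPolicy` structure, and the lookup logic are hypothetical, purely to illustrate the granularity on the table:

```python
from dataclasses import dataclass, field

@dataclass
class CharacterPolicy:
    """Hypothetical per-character rule a rightsholder might register."""
    character: str
    mode: str                                               # "allow", "block", or "conditional"
    blocked_countries: set[str] = field(default_factory=set)
    blocked_contexts: set[str] = field(default_factory=set)  # e.g. {"violence", "political"}

def is_generation_allowed(policy: CharacterPolicy, country: str, context: str) -> bool:
    if policy.mode == "block":
        return False
    if policy.mode == "allow":
        return True
    # "conditional": allow unless the request trips a country or context rule
    return country not in policy.blocked_countries and context not in policy.blocked_contexts

# Example: a character blocked in one market and barred from violent contexts everywhere
policy = CharacterPolicy("ExampleMascot", "conditional",
                         blocked_countries={"JP"}, blocked_contexts={"violence"})
print(is_generation_allowed(policy, country="US", context="comedy"))   # True
print(is_generation_allowed(policy, country="JP", context="comedy"))   # False
```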
The move comes at an increasingly intense moment for Sora’s iOS app, where user-generated clips have sharpened concerns about fair use, parody, and outright infringement. A system that lets IP owners dictate when, where, and in what context their characters can appear aims to turn Sora from a novelty feed into a controlled platform fit for brands and studios.
The Revenue Share Question for Copyright Holders on Sora
OpenAI says it is testing the waters on revenue sharing with copyright holders, but the specifics are fuzzy. The closest existing analogue is YouTube’s Content ID program, which gives rightsholders the ability to block or monetize videos that use their content; Google estimates it has paid out billions of dollars to partners over the years. Applying such a model to generative video would be gnarly: whose character gets paid when you prompt “Tintin meets Mad Max in the Battle of Stalingrad”?
Expect tinkering with payout attribution, claim hierarchies, and thresholds for “substantial similarity.” Publishers and labels will want clear audit trails and an appeals process; creators, meanwhile, will lobby for fair revenue splits and rules that don’t over-block legitimate parody or commentary.
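To make the attribution problem concrete, here is a minimal sketch of pro-rata payout splitting among claimed IPs. The similarity scores, the threshold, and the claim structure are all invented for illustration; nothing here reflects an actual Sora mechanism:

```python
def split_revenue(clip_revenue: float, claims: dict[str, float],
                  similarity_threshold: float = 0.5) -> dict[str, float]:
    """Split a clip's revenue pro rata among rightsholders whose claimed
    characters score above a 'substantial similarity' threshold (all invented)."""
    eligible = {holder: score for holder, score in claims.items()
                if score >= similarity_threshold}
    if not eligible:
        return {}
    total = sum(eligible.values())
    return {holder: clip_revenue * score / total for holder, score in eligible.items()}

# "Tintin meets Mad Max": two strong matches, one weak match that falls below the bar
print(split_revenue(100.0, {"Moulinsart": 0.9, "WBD": 0.8, "Other": 0.2}))
# {'Moulinsart': 52.94..., 'WBD': 47.05...}
```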
Control at the Granular Level Will Make or Break Adoption
For studios and IP holders, the devil is in the defaults. If character generation is opt-out, expect a flood of takedown demands reminiscent of YouTube’s early copyright battles. If it is opt-in and paired with strong enforcement, rightsholders are far more likely to license. Some conceivable features: per-character blocklists and allowlists, usage-based rulings on violence or sexual content, time-limited campaigns restricted to geofenced territories, and scene-level moderation that intercepts obvious abuses before they render; a sketch of how one such rule might be checked follows below.
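As a rough illustration, a geofenced, time-limited campaign rule might be evaluated like this. The dates, territories, and function names are hypothetical:

```python
from datetime import date

def campaign_active(expiry: date, allowed_territories: set[str],
                    user_country: str, today: date | None = None) -> bool:
    """Check a hypothetical time-limited, geofenced licensing campaign.
    Expiry should be enforced server-side, never against a device clock."""
    today = today or date.today()
    return today <= expiry and user_country in allowed_territories

print(campaign_active(date(2025, 12, 31), {"US", "CA"}, "US"))  # True while unexpired
print(campaign_active(date(2025, 12, 31), {"US", "CA"}, "DE"))  # False: outside geofence
```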
Provenance and watermarking are also likely to be essential.
Adopting standards such as C2PA content credentials (backed by the likes of Adobe, Microsoft, and the BBC), coupled with invisible watermarking, would help platforms track assets, attribute usage, and reconcile revenue claims further down the line.
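As a rough picture of what a content credential records, here is a simplified, non-conformant sketch. Real C2PA manifests are signed binary (JUMBF) structures, not loose JSON, and every field below is a plain stand-in:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_credential_sketch(video_bytes: bytes, generator: str, ip_claims: list[str]) -> str:
    """Toy stand-in for a C2PA-style manifest: binds a content hash to
    provenance assertions. Real C2PA uses cryptographically signed JUMBF boxes."""
    manifest = {
        "claim_generator": generator,                        # e.g. the rendering model
        "content_hash": hashlib.sha256(video_bytes).hexdigest(),
        "created": datetime.now(timezone.utc).isoformat(),
        "assertions": [{"label": "ip.claim", "data": {"character": c}} for c in ip_claims],
    }
    return json.dumps(manifest, indent=2)

print(make_credential_sketch(b"<rendered video bytes>", "sora-example", ["ExampleMascot"]))
```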
Early Pushback From Japan Points to Global Stakes
Altman also cited Japan’s creative influence, but the reaction there makes clear what a tightrope lies ahead. Lawmakers, including Akihisa Shiozaki, have raised legal and policy concerns about how Sora has handled beloved anime characters, suggesting that some franchises have already been restricted. And since Japan’s content industries are heavyweight exporters, their stance could set norms across Asia and beyond.
For decades, global entertainment companies have fought to preserve character integrity and brand safety. Any perception that Sora enables illicit crossovers could invite regulatory scrutiny similar to what music services endured before they signed licensing deals with labels and publishers.
Safety Shortfalls Add to Pressure for Stronger Sora Controls
OpenAI’s own tests show a nontrivial risk: a 1.6 percent chance that Sora produces sexual deepfakes when working from a person’s likeness, even with guardrails in place. On paper that is a small rate, but at internet scale it becomes significant and intersects with other harms. Researchers have long found that most deepfakes online are sexualized (a widely cited 2019 report by Deeptrace estimated around 96 percent), and victims often suffer reputational damage that is impossible to undo.
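To see why a small rate matters at scale, a back-of-the-envelope calculation helps; the daily volume figure here is purely hypothetical:

```python
failure_rate = 0.016                     # OpenAI's reported 1.6% guardrail failure rate
daily_likeness_generations = 1_000_000   # hypothetical volume, for illustration only

expected_failures_per_day = failure_rate * daily_likeness_generations
print(f"Expected unsafe outputs per day: {expected_failures_per_day:,.0f}")  # 16,000
```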
To win trust, OpenAI will have to pursue layered defenses: prompt filtering, pre- and post-render checks on sensitive content, expedited takedown protocols, and law-enforcement referrals when necessary, along the lines of the sketch below. Coordinating with advocacy groups and industry bodies can shape remedies for victims of abuse while drawing clearer red lines for creators.
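A layered pipeline of that kind might be wired together as follows; each checker is a stub standing in for a real classifier, and none of this reflects OpenAI’s actual stack:

```python
def prompt_filter(prompt: str) -> bool:
    """Stub: reject prompts that pair a real person's likeness with sexual content."""
    return "deepfake" not in prompt.lower()

def pre_render_check(prompt: str) -> bool:
    """Stub: classifier pass over the prompt and any conditioning inputs."""
    return True

def post_render_check(video: bytes) -> bool:
    """Stub: vision-model scan of rendered frames for sensitive content."""
    return True

def generate_safely(prompt: str, render) -> bytes | None:
    """Run every layer; any single failure blocks the output."""
    if not (prompt_filter(prompt) and pre_render_check(prompt)):
        return None                      # blocked before spending render compute
    video = render(prompt)
    return video if post_render_check(video) else None

# Usage with a dummy renderer standing in for the video model
print(generate_safely("a comedy sketch", lambda p: b"<frames>"))  # b'<frames>'
```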
What To Watch Next as OpenAI Tests Sora IP Controls
Key signals to track:
- Whether opt-in becomes the default
- How granular the rule sets get across context, tone, and setting
- Whether OpenAI publishes transparency reports on claims, blocks, and payouts
- Technical signals: content credentials, watermark durability after edits, and model updates that reduce lookalike outputs
If OpenAI can bring rights owners on board with predictable rules and actual revenue, Sora could be a meaningful distribution channel for licensed IP riffs rather than simply a meme machine.
If not, “Pikachu in Oppenheimer” may be recalled as the moment that Hollywood and Silicon Valley told generative video to grow up — or be regulated into compliance.