OpenAI is preparing to release opt-in controls for rights holders in Sora, the company’s new text-to-video application, a move that signals a firmer stance on intellectual property in generative media. CEO Sam Altman laid out plans to let the owners of characters and other copyrighted assets set fine-grained permissions for when those elements can appear in user-generated videos.
The move reframes Sora’s relationship with entertainment IP at a moment when the app’s early buzz (invite-only access, a surge up the App Store charts and viral “cameo” clips that insert a user into scenes by capturing their likeness) has collided with users’ desire to remix beloved studio characters. Altman described the new format as “interactive fan fiction,” but said participation by rights holders would be explicit and opt-in rather than assumed.
What Granular Really Means for Rights Holders
Granular controls imply more than a simple binary yes/no switch. Expect per-character permissions, usage scopes (parody permitted, endorsements not), content ratings and guardrails against political or adult contexts. Studios could also set rules for co-appearances (which characters can share screen time), geographic windows or monetization rights for particular franchises.
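As a rough illustration, a per-character policy could bundle those dimensions into one record. The Python sketch below is purely hypothetical; every field name and default is an assumption rather than OpenAI’s actual schema.

```python
from dataclasses import dataclass

# Hypothetical per-character permission policy. Every field name, default and
# value here is an illustrative assumption, not OpenAI's actual schema.
@dataclass(frozen=True)
class CharacterPolicy:
    character_id: str                                # rights holder's registered asset
    allow_generation: bool = False                   # opt-in: default is "off"
    allow_parody: bool = False
    allow_endorsement: bool = False
    blocked_contexts: tuple = ("political", "adult")
    allowed_co_appearances: frozenset = frozenset()  # other character_ids
    allowed_regions: frozenset = frozenset()         # empty = worldwide
    revenue_share: float = 0.0                       # rights holder's cut of generation revenue

# A studio opting in one character for parody use in two markets.
studio_policy = CharacterPolicy(
    character_id="studio_x/space_captain",
    allow_generation=True,
    allow_parody=True,
    allowed_regions=frozenset({"US", "CA"}),
    revenue_share=0.30,
)
```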
That almost certainly requires an IP registry: an identity layer that connects a rights holder’s policies to semantic concepts inside the model. In practice that can mean anything from embedding-based identification of restricted characters, to baked-in blocks for disallowed prompts, to automated licensing checks at render time. It resembles how music services clear composition and recording rights before hitting play, but tuned to generative outputs rather than static catalogs.
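One plausible shape for a render-time check: compare a prompt’s embedding against registered assets, then apply the matching rights holder’s policy before rendering. The sketch below reuses the hypothetical CharacterPolicy above, with an illustrative similarity threshold; none of it reflects Sora’s actual pipeline.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.85  # illustrative cutoff; a real system would tune this empirically

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_permissions(prompt_embedding: np.ndarray, registry: list, request: dict) -> str:
    """Match a prompt against registered character embeddings, then apply the
    owning rights holder's policy. Returns 'blocked', 'licensed' or 'allowed'."""
    for asset in registry:  # each entry: {"embedding": ndarray, "policy": CharacterPolicy}
        if cosine_similarity(prompt_embedding, asset["embedding"]) < SIMILARITY_THRESHOLD:
            continue
        policy = asset["policy"]
        if not policy.allow_generation:
            return "blocked"
        if request.get("context") in policy.blocked_contexts:
            return "blocked"
        if policy.allowed_regions and request.get("region") not in policy.allowed_regions:
            return "blocked"
        return "licensed"   # proceed, but log the match for revenue sharing
    return "allowed"        # no registered IP detected in the prompt
```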
For user likenesses, Sora already has an opt-in system for “cameos,” in which people upload biometric data to create digital doubles of themselves. Extending that thinking to fictional characters and branded property is consistent with prevailing right-of-publicity norms, and it reduces legal exposure under statutes like Illinois’ Biometric Information Privacy Act when real people are involved.
Why OpenAI’s Sora Strategy on IP Controls Is Changing
Entertainment partners were originally told, according to earlier reporting by The Wall Street Journal, to opt out if they did not want their characters featured in Sora videos. Opting in reverses that default and brings the product closer to how studios already license IP: permission first, terms attached, exceptions explicit.
It also comes amid broader pressure. The EU’s AI Act reinforces rights holders’ text-and-data-mining opt-outs and adds transparency requirements. In the US, recent labor agreements struck by SAG-AFTRA and the WGA centered on consent and compensation around digital replicas. At the same time, lawsuits over training data and style mimicry are putting platforms on notice that “use now, negotiate later” is a recipe for expensive conflict.
OpenAI has already tested licensing models — deals with the Associated Press and Shutterstock show that the company is willing to pay for access. If it wants to court the studios without shutting off access for creators, the next step is a generation-time permissions layer for Sora.
Monetization and Revenue Sharing for Sora Videos
Altman also signaled that video generation will need a path to sustainability, including sharing revenue with rights holders. One model that fits is YouTube’s Content ID, which routes ad revenue to rights holders whenever fan uploads contain copyrighted audio or video. In a generative setting, the split could happen at render time, at distribution, or both (e.g., per-minute generation fees plus an ad revenue share when clips are shared).
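To make the arithmetic concrete, here is a hedged sketch of how such a split might be computed for a single clip, with every rate invented for illustration rather than drawn from any disclosed terms.

```python
# Back-of-the-envelope split for one licensed generation. The fee, ad revenue
# and share figures are illustrative assumptions, not disclosed OpenAI terms.
def split_generation_revenue(minutes: float, per_minute_fee: float,
                             ad_revenue: float, rights_holder_share: float) -> dict:
    generation_fee = minutes * per_minute_fee
    total = generation_fee + ad_revenue
    rights_holder_cut = total * rights_holder_share
    return {
        "rights_holder": round(rights_holder_cut, 2),
        "platform": round(total - rights_holder_cut, 2),
    }

# A 30-second clip at $3.00/minute, $1.20 of ad revenue, 30% rights-holder share:
print(split_generation_revenue(0.5, 3.00, 1.20, 0.30))
# {'rights_holder': 0.81, 'platform': 1.89}
```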
For creators, there will likely be tiers of access: free or low-cost generations that steer clear of licensed IP, and premium tiers where licensed characters can appear if approved. For studios, dashboards could surface usage stats (which characters are trending, which contexts drive the most watch time) so that permissions become market insight rather than a purely defensive wall.
The Tough Problems That Remain for Rights Control
Even with opt-in, enforcement remains messy. Altman conceded that some out-of-bounds generations will slip through. Preventing prompt-based evasion, catching near-lookalike variants and detecting composite characters are not trivial tasks. Invisible watermarking and provenance metadata through standards like C2PA could help identify outputs, but they don’t solve the policy layer on their own.
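For a sense of what provenance metadata carries, the stub below builds a minimal record (a content hash plus generation details). It is a sketch only, not the real C2PA toolchain, and the field names are assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_record(video_bytes: bytes, model: str, licensed_assets: list) -> str:
    """Minimal provenance stub: a content hash plus generation metadata.
    A production system would emit a signed manifest via a standard such as
    C2PA; this only shows the kind of information such a record carries."""
    record = {
        "content_sha256": hashlib.sha256(video_bytes).hexdigest(),
        "generator": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "licensed_assets": licensed_assets,  # registered IP that appeared in the clip
    }
    return json.dumps(record, indent=2)

# Example: tag a rendered clip with the characters it was licensed to include.
manifest = build_provenance_record(b"<video bytes>", "sora", ["studio_x/space_captain"])
```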
A second open question is training versus generation. Opt-in controls govern what Sora will render, but not necessarily what the model has learned. Regulators and rights holders will want to know whether restricted assets are excluded from training sets, masked at inference or simply blocked at the last mile. Transparent documentation and third-party audits would go a long way toward building trust.
Finally, there’s the long tail. Major studios can staff negotiations; independent artists and small IP owners often cannot. If OpenAI wants a healthy ecosystem, it will need easy self-serve tooling to register assets, set defaults and collect revenue, much the way indie musicians onboard to music distributors without needing legal teams.
What This Shift Means for Generative Video Creators
The shift to granular, opt-in controls is a sensible reset. It promises to make user experimentation safer, the value to rights holders clearer and OpenAI’s legal minefield a little less crowded. Done right, with strong detection, clear policies and real revenue sharing for artists, it could transform “interactive fan fiction” from a legal migraine into an authorized format that studios actively program alternate versions of their work for.
The challenge now is operational: matching permissions, payments and provenance at generative pace.
The companies that figure out that stack will establish the rules of engagement for AI-era entertainment.