OpenAI’s Sora is drawing a hard line on deepfakes of living public figures, but still leaving the door open for depictions of deceased celebrities.
The invite-only app is already inundated with slick AI-created videos featuring Michael Jackson, Tupac Shakur, Bob Ross, Malcolm X, and Bruce Lee, among others, as well as a host of famous characters. Every Sora video is watermarked, and the company says living public figures can be included only with their direct permission, via a Cameo-style opt-in feature.
The carve-out places Sora at the center of a thorny debate over where consent, postmortem rights, and platform responsibility meet the viral pull of hyperreal generative video.
What the ban covers for living public figures
OpenAI frames its ban on unauthorized deepfakes of living public figures as a safety measure, paired with a Cameo-style consent mechanism through which people can opt in. In practical terms, that means you can’t generate a current politician or pop star without their sign-off, but dead people and made-up IP are fair game.
Users have published clips that don’t just resemble a person’s face; they capture voices and mannerisms with uncanny plausibility. Some videos feature familiar songs, suggesting a second minefield around music licensing. The result is a feed that feels less like Miss Cleo’s magic phone and more like an anthropological time capsule, one generated by an AI model in seconds.
Why depictions of dead celebrities remain an exception
Sora permits depictions of deceased figures without requiring estate consent, but that doesn’t make the legal terrain simple. Many jurisdictions allow rights of publicity to survive death: California, through its Celebrities Rights Act, and New York, with its postmortem protections, both give estates the power to police commercial uses of a dead person’s name, image, and likeness. That leaves platforms and creators on the hook if a clip functions as advertising, incorporates protected trademarks, or implies an endorsement.
Estates have a track record of aggressive enforcement. The families and companies behind legends like Michael Jackson and Marilyn Monroe have fought repeatedly over likeness rights and merchandise. If OpenAI moves to monetize Sora, or if its output gets repurposed in brand partnerships or Fortnite-style commercial crossovers, expect legal challenges.
Misinformation risks despite watermarks
OpenAI says every Sora video carries a watermark, a baseline defense that is better than nothing but easy to blunt with cropping, re-encoding, or simple re-uploading. Researchers and digital-rights advocates, including the Electronic Frontier Foundation, have long cautioned that provenance labels alone are no silver bullet without persistent metadata and cross-platform enforcement.
Public anxiety is already elevated. The Reuters Institute has found that most people worry about distinguishing real material from AI-generated content, a perceptual hurdle that only grows when hyperreal videos gain traction without warning. And the danger is more than theoretical: audio deepfakes have impersonated political leaders and duped call recipients, prompting regulators including the FCC and consumer-protection agencies to crack down on AI-powered robocalls. Video that looks genuine at first glance compounds the problem.
Copyright and trademark headwinds for AI-generated clips
Allowing users to generate Nintendo’s Mario or Nickelodeon’s SpongeBob, for instance, opens the door to classic copyright and trademark liability. Companies with aggressive IP enforcement arms are unlikely to treat mass distribution of unlicensed character likenesses, human or not, as permissible, even when the output is labeled “AI-generated.” Watch for takedowns and, potentially, test cases over where transformative fair use ends and infringement begins.
Music rights are a second layer. Recent industry moves, such as the Recording Industry Association of America’s lawsuits against AI music startups, show how quickly labels act when models duplicate signature sounds or rely on catalog recordings without consent. If Sora clips include recognizable vocals or unlicensed tracks, claims could come from multiple angles against both creators and the platform.
How Sora can reinforce guardrails for safer creation
Three levers matter most: consent, provenance, and enforcement. Consent should be granular and auditable, especially where public figures and estates are involved. Provenance needs to be more than visible watermarks; it should include tamper-resistant metadata aligned with the C2PA standard backed by Adobe, Microsoft, the BBC, and others, plus visible labels that persist across reposts.
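To make the provenance idea concrete, here is a minimal sketch of tamper-evident metadata, with HMAC signing standing in for the certificate-based signatures real C2PA manifests use. The key, manifest fields, and function names are illustrative assumptions, not OpenAI’s system or the C2PA API.

```python
# Minimal sketch of tamper-evident provenance, in the spirit of C2PA.
# Real C2PA manifests use X.509 certificate chains and are embedded in the
# media container; this stand-in signs a JSON manifest with HMAC instead.
import hashlib
import hmac
import json

SIGNING_KEY = b"platform-held-secret"  # hypothetical; C2PA uses PKI, not a shared key

def make_manifest(video_bytes: bytes, generator: str) -> dict:
    """Bind a provenance claim to the exact bytes of the video."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    claim = {"generator": generator, "content_sha256": digest}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(video_bytes: bytes, manifest: dict) -> bool:
    """Any change to the bytes or the claim invalidates the signature."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_sha256"] == hashlib.sha256(video_bytes).hexdigest())

video = b"\x00fake-mp4-bytes"                   # stand-in for real video data
manifest = make_manifest(video, "sora")
print(verify_manifest(video, manifest))         # True: intact
print(verify_manifest(video + b"x", manifest))  # False: bytes changed
```

The sketch illustrates the core trade-off: a claim bound to a content hash breaks the moment the bytes change, which is exactly why the metadata must travel with the file across re-encodes and re-uploads to stay useful.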
Policing needs to exist on both the front and back ends: timely estate verification, IP takedowns that stick across mirrors, and at least a little friction or rate limiting when prompts drift toward impersonation. YouTube already requires synthetic-media disclosures and removes impersonations; Sora will be measured against those practices, at the very least.
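As a rough illustration of what front-end friction could look like, the sketch below pairs a naive keyword screen with a sliding-window rate limit. The hint list, thresholds, and function names are hypothetical; a production system would lean on classifiers and human review rather than substring matching.

```python
# Illustrative sketch of front-end friction: a keyword screen plus a
# sliding-window rate limit on prompts that drift toward impersonation.
# The blocklist and thresholds are made-up examples, not Sora's actual rules.
import time
from collections import defaultdict, deque

IMPERSONATION_HINTS = ("realistic video of", "in the voice of", "saying")  # hypothetical
WINDOW_SECONDS = 3600
MAX_FLAGGED_PER_WINDOW = 3

_flagged: dict[str, deque] = defaultdict(deque)

def looks_like_impersonation(prompt: str) -> bool:
    p = prompt.lower()
    return any(hint in p for hint in IMPERSONATION_HINTS)

def allow_prompt(user_id: str, prompt: str, now: float | None = None) -> bool:
    """Permit the prompt, or apply friction once a user trips the screen too often."""
    if not looks_like_impersonation(prompt):
        return True
    now = time.time() if now is None else now
    window = _flagged[user_id]
    while window and now - window[0] > WINDOW_SECONDS:  # drop stale flags
        window.popleft()
    window.append(now)
    return len(window) <= MAX_FLAGGED_PER_WINDOW  # over the cap: route to review

print(allow_prompt("u1", "a cat skateboarding"))  # True: nothing flagged
for _ in range(4):
    ok = allow_prompt("u1", "realistic video of a senator saying ...")
print(ok)  # False: fourth flagged prompt in the window hits the cap
```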
Sora’s bottom line on deepfakes, consent, and risk
OpenAI’s split policy (ban deepfakes of living public figures, permit replicas of dead celebrities) values consent without abandoning cultural remix entirely. But the exemption invites court challenges from estates and rights holders, and it raises the risk that casual viewers will mistake slick AI fiction for fact. If Sora aims to be the storefront for generative video, it needs consent workflows estates trust, provenance that travels with the clip, and rules clear enough that creators can follow them without hiring a lawyer.