You can’t libel the dead. That well-established legal reality has now collided with a novel technological one: The same consumer-grade AIs that can reproduce anyone’s face and voice with astonishing naturalism also enable users to efficiently create photo-real simulacra of the deceased. The result is a loophole large enough to drive a cultural reckoning through: Just because the dead can’t sue for libel doesn’t mean that turning them into puppets is harmless, or ethical, or without consequence.
We’re already seeing the rawness of that wound. Robin Williams’ daughter, Zelda, has pleaded with fans to stop sending her AI-generated renditions of her father. Her plea crystallized the silent injury that the law does not capture: grief, dignity and a coherent legacy are not abstract legal concepts; they are human stakes.

What The Law Says And What It Doesn’t Cover Today
Defamation law protects the living; it does not extend beyond the grave. As the Student Press Law Center reports, libel claims die with the person, and most privacy torts do as well. Data protection laws such as Europe’s GDPR likewise apply only to living individuals. That leaves a huge gulf in which AI can reanimate a public figure to hawk a product, confess to crimes or voice opinions they never held, with no easy legal recourse for families or the public.
There are some guardrails. The “right of publicity” survives death in many U.S. states, limiting commercial use of a person’s name, likeness or voice. California has long protected celebrity likenesses for decades after death; New York more recently created a registerable postmortem right with its own enforcement mechanisms; and statutes in states like Tennessee and Indiana cast an unusually broad umbrella over local icons. But those laws are inconsistent, studded with exceptions and often restricted to commercial or advertising uses.
The Power And Peril Of Postmortem Rights In Media
Postmortem publicity rights have enabled consensual, sunlit uses: The Tupac hologram relied on estate cooperation, and Lucasfilm worked with rights holders to digitally re-create Peter Cushing for a “Star Wars” film. But even sanctioned uses can be fraught. When a documentary aired an AI-synthesized rendition of Anthony Bourdain’s voice, fans and peers pushed back vocally, arguing that, whatever the legalities, the soul of the artist was being repurposed without authentic consent.
Most importantly, these protections skip over many deepfakes. Noncommercial material framed as satire, historical reimagining or fan art can glide through publicity laws while still misleading fans and distressing families. In that gray zone, ethics, not the law, must take center stage.
Platforms Are Making The Rules As They Go
AI video tools are outpacing policy.
One well-known model forbids generating living people without permission but permits the deceased, in effect treating the dead as a class of content rather than as people whose names and faces still carry meaning. Not surprisingly, within days of the tool’s release, social media feeds were crammed with eerie likenesses of historical leaders and dead celebrities.

Rights groups and industry bodies are pushing back. The Motion Picture Association notes that copyright and performers’ rights still apply even when the tools are new. Sensity AI has documented explosive growth in deepfake creation, with non-consensual content dominating its early datasets. And while some makers tout their safeguards, others offer far fewer guardrails, making it easy to generate sexualized or defamatory fakes; advocacy organizations such as the Electronic Frontier Foundation and the Electronic Privacy Information Center say that conduct calls for stronger, harmonized standards.
Harms Beyond The Law: Grief, Distortion And Misinformation
At least three harms ought to fall within the law’s purview but routinely do not. First is the pain to families: Convincing replicas can reopen grief and reduce a person’s memory to a meme. Second is cultural distortion: Putting words in the mouths of the departed manipulates collective memory and warps the way history is taught and shared. Third is misinformation: Realistic fabrications can be used to launder propaganda, a risk flagged by researchers at NIST and at organizations devoted to studying mis- and disinformation.
Watermarks and provenance tools help, but they’re not panaceas. The C2PA standard, backed by major media and tech companies, can attach cryptographic “nutrition labels” that document how a piece of media was created. That’s necessary infrastructure, but a label does little for the person who is misrepresented, or for the family that must hear a loved one espouse, from beyond the grave, views they never held.
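To make the “nutrition label” idea concrete, here is a minimal Python sketch of what such a provenance record might contain. It is illustrative only, not the real C2PA format: the actual standard embeds a signed binary manifest in the media file itself, the field names below are only loosely modeled on its published assertions, the tool name is hypothetical, and the signing step is stubbed out.

import hashlib
import json

def make_provenance_label(media_bytes: bytes, tool_name: str) -> dict:
    # Simplified, C2PA-style provenance record for a synthetic video.
    return {
        "claim_generator": tool_name,  # which tool produced the asset
        "assertions": [{
            "label": "c2pa.actions",
            "data": {"actions": [{
                "action": "c2pa.created",
                # marks the asset as wholly AI-generated
                "digitalSourceType": "trainedAlgorithmicMedia",
            }]},
        }],
        # The hash binds the label to this exact file; any edit breaks the match.
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
        "signature": "<X.509 signature in a real implementation>",
    }

label = make_provenance_label(b"...video bytes...", "example-video-model")
print(json.dumps(label, indent=2))

A verifier would re-hash the file, compare the result against the recorded hash and check the signature; if either fails, the label, and everything it claims about the media’s origin, cannot be trusted.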
Toward A Responsible Standard For AI Deepfakes
We need a basic cultural norm: Don’t create new speech for the dead. When a depiction is genuinely needed for a news report, documentary or educational program, it should be presented in context and explicitly labeled as a reconstruction, and it should not invent new opinions or endorsements. When in doubt, ask the estate, and accept a “no.” Consent should be the starting point of any project, not an afterthought.
Platforms can also institute a default “deceased opt-out” that estates and rights holders can invoke once, rather than through one takedown request at a time, and can require proof of estate permission before a dead person’s likeness is used commercially or in sponsored content. Clear, persistent disclosures, on screen and in metadata, should be mandatory for synthetic media that depicts real people, living or dead. C2PA-style provenance, combined with strong detection and user reporting, can add useful friction.
Lawmakers can narrow the gaps by extending targeted protections against deceptive or sexualized deepfakes to cover the deceased, standardizing postmortem publicity rights across states, and giving families a circumscribed right of removal and remedy. The aim is not to criminalize art or satire; it is to end exploitation that causes real-world harm and erodes public trust.
We cannot libel the dead, but we can still wrong them. We will judge ourselves by whether we use this technology to build empathy and shared truth between people, or to flatten other people’s lives into mere content, puppets on a screen. Choose the former.