Google is facing a new legal challenge over artificial voices. Former NPR anchor David Greene has filed a lawsuit alleging the male narrator in NotebookLM’s Audio Overviews is an unauthorized replica of his voice, effectively letting Google’s AI speak in a style he spent years honing on air. Google disputes the claim, saying the voice is simply a professional actor’s performance.
What Greene Alleges About NotebookLM’s Narrator Voice
Greene, known to millions from his years hosting Morning Edition, argues the NotebookLM voice copies the cadence, pacing, and even the subtle filler tics he worked to reduce but never fully eliminated. In comments reported by The Washington Post, he said the AI sounds “uncannily” like him and could be made to say things he would never endorse or say himself.

The complaint asserts Google benefited from a distinctive on-air persona built over decades without seeking permission or offering compensation. It also suggests, without direct proof, that Greene’s extensive broadcast recordings may have been used to train or guide the system. Central to the claim is a right-of-publicity argument: that a recognizable voice—like a face or name—has commercial value that individuals have a right to control.
Google’s Response And How NotebookLM Works
Google has called the lawsuit baseless, maintaining that the male voice in question was recorded by a paid professional actor. The company’s position is that similarity does not equal theft, and that any overlap reflects performance choices rather than copying or training on Greene’s recordings.
NotebookLM’s Audio Overviews feature turns user-provided materials into spoken summaries. While the content is AI-generated, the voice delivering those summaries can be a conventional recorded voice or a synthetic rendering based on an actor’s performance profile. Importantly, a lifelike performance does not require training on a particular person’s voice; skilled actors and modern text-to-speech systems can produce styles that evoke familiar broadcast tones without sampling a specific individual.
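For readers who want a concrete picture of that general pattern, the sketch below is a minimal, illustrative document-to-audio pipeline. It is emphatically not NotebookLM’s code: the summarize step is a placeholder, and pyttsx3 is just one off-the-shelf text-to-speech library that speaks with whatever generic voices the operating system provides, with no individual’s recordings involved.

```python
# Illustrative only: a toy "document in, spoken summary out" pipeline.
# NOT NotebookLM's implementation; the summarizer is a stub and pyttsx3
# is simply a convenient, widely available TTS engine.
import pyttsx3

def summarize(text: str, max_sentences: int = 3) -> str:
    """Placeholder summarizer: keep the first few sentences.
    A production system would use an LLM or extractive model here."""
    sentences = text.replace("\n", " ").split(". ")
    return ". ".join(sentences[:max_sentences]).rstrip(".") + "."

def speak_summary(document_text: str) -> None:
    summary = summarize(document_text)
    engine = pyttsx3.init()          # uses the operating system's voices
    engine.setProperty("rate", 170)  # words per minute; a measured pace
    engine.say(summary)
    engine.runAndWait()

if __name__ == "__main__":
    speak_summary(
        "Audio Overviews turn user-provided materials into spoken summaries. "
        "The narration voice can be recorded or synthetic. "
        "A lifelike delivery does not require sampling any one person."
    )
```

Even this toy pipeline illustrates the point at issue: the narration voice is a configuration choice layered on top of the AI-generated content it reads, which is why the provenance of that voice, not the summarization technology, is what the lawsuit contests.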
The Legal Landscape Around Voice Likeness
U.S. law recognizes a right of publicity that protects a person’s identity from unauthorized commercial exploitation, but the rules vary by state and have rarely been tested against modern AI. Two landmark cases continue to loom large: Bette Midler’s successful suit against Ford for using a sound-alike singer after she declined to appear, and Tom Waits’s case against Frito-Lay over a deliberately imitated gravelly vocal in a commercial. In both, courts found that a distinctive voice can be protected, even if performed by another person.
Greene’s claim may hinge on whether a product’s built-in narrator is more akin to a general user interface or to an endorsement-laden advertisement. California, New York, and Illinois provide statutory rights of publicity, but there is no single federal standard. If a court finds the NotebookLM voice substantially evocative of a well-known broadcaster and exploited for commercial advantage, the Midler and Waits precedents could matter. If, instead, it is judged a generic “newsreader” style delivered by an actor without intent to trade on Greene’s identity, Google’s defense strengthens.

Industry Flashpoints And Public Reaction
Voice likeness disputes have become a fault line for AI companies. After a public outcry over a ChatGPT voice that many listeners said recalled Scarlett Johansson, OpenAI paused the voice and emphasized it had hired a different actor months earlier. Meanwhile, SAG-AFTRA has pressed for explicit consent and pay for synthetic voice uses in entertainment and advertising, reflecting broader labor concerns.
Regulators are also circling. The FCC has ruled that AI-generated voices in robocalls are illegal under existing telephone consumer protection rules, and the FTC has warned companies about deceptive AI impersonation. The FTC’s consumer data show impostor scams are among the top sources of fraud losses, tallying billions of dollars in a recent year, evidence that realistic synthetic voices can cause real-world harm when misused.
For everyday users, the line between a professional “newsreader” delivery and an identifiable broadcaster can feel thin. In audio, signature elements such as breath patterns, pauses, and emphasis are branding. That blurring raises the stakes for AI product teams: an actor’s neutral delivery could still trigger recognition, even without training on any particular person.
What To Watch Next As Google Faces Voice Lawsuit
The case could turn on discovery. Casting records, direction notes, and technical documentation about how the voice was produced may clarify intent and process. Audio forensics, such as comparing spectrograms, prosody, and phoneme timing, could help determine whether the similarity is coincidental style overlap or a deliberate “sound-alike”; a toy version of that kind of comparison is sketched below.
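As a rough illustration of the forensic idea, the following sketch compares two recordings by extracting MFCC features (a coarse fingerprint of vocal timbre) and aligning them with dynamic time warping so that differences in pacing do not dominate the score. This is a crude measure under stated assumptions: the file names are hypothetical, librosa is just one convenient library, and real forensic work layers on prosody modeling, phoneme timing, and speaker-embedding analysis.

```python
# Illustrative only: a crude timbre-similarity check between two recordings.
# Real forensic analysis is far more involved; file paths are hypothetical.
import librosa
import numpy as np

def voice_similarity(path_a: str, path_b: str, sr: int = 16000) -> float:
    y_a, _ = librosa.load(path_a, sr=sr)
    y_b, _ = librosa.load(path_b, sr=sr)
    # 13-coefficient MFCCs capture the coarse spectral envelope of a voice
    mfcc_a = librosa.feature.mfcc(y=y_a, sr=sr, n_mfcc=13)
    mfcc_b = librosa.feature.mfcc(y=y_b, sr=sr, n_mfcc=13)
    # Dynamic time warping aligns the sequences despite different pacing
    D, wp = librosa.sequence.dtw(X=mfcc_a, Y=mfcc_b, metric="cosine")
    # Normalize the accumulated cost by path length: lower = more similar
    return float(D[-1, -1] / len(wp))

if __name__ == "__main__":
    cost = voice_similarity("broadcast_clip.wav", "ai_narration.wav")
    print(f"Normalized DTW cost: {cost:.3f} (lower suggests closer timbre)")
```

A low normalized cost would suggest similar timbre and delivery, but no single number settles a courtroom question; at most, measures like this corroborate expert testimony about how close two voices really are.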
Regardless of the outcome, the industry trend is clear. Companies are moving toward explicit consent frameworks, voice performer rosters, and clearer disclosures when content is AI-assisted. Policymakers in the U.S. and abroad are weighing transparency requirements for synthetic media, signaling that provenance and permission are fast becoming table stakes.
Greene’s lawsuit doesn’t just ask whether a tech giant crossed a line—it asks courts to redraw the line for the AI era. If his claim proceeds, it may set a reference point for how far an AI narration can lean into a recognizable broadcast style before it becomes someone else’s voice.
