Google has pushed back against a lawsuit from radio personality David Greene alleging the company copied his voice for NotebookLM’s AI-generated “Audio Overviews,” insisting the narrator’s voice belongs to a hired professional actor and is not modeled on Greene.
Greene, a former co-host of NPR’s Morning Edition and current host of the Left, Right & Center podcast from KCRW, claims the tool mimics his delivery and cadence so closely that listeners could believe he participated. The complaint, reported by major outlets including The Washington Post, accuses Google of exploiting his distinctive voice without consent.
At the heart of the dispute is NotebookLM, Google’s AI research assistant that can produce podcast-style summaries of user-provided materials. Its default male narrator drew attention for a public-radio-like tone, which Greene argues trades on the commercial value of his persona.
Google says the narrator is a hired actor, not David Greene
Google flatly rejects the allegation. Company spokesperson José Castañeda told multiple publications that the claim is baseless and that the Audio Overviews voice was recorded by a paid voice actor. Google also maintains that NotebookLM was not trained on Greene’s voice and was not designed to evoke him.
The company’s defense hinges on provenance: if it can demonstrate a clean chain of title for the actor’s performance and show that product teams did not target Greene’s sound, it undercuts the central premise that the voice is a deliberate imitation intended to trade on his identity.
Inside David Greene’s lawsuit over NotebookLM’s AI voice
Greene’s complaint invokes California’s statutory and common law right of publicity, along with unfair competition claims, arguing that his voice functions as a protected element of his likeness. The filing asserts that Google sought to replicate not just timbre but also his pacing, intonation, and on-air persona to lend credibility to an AI product.
Right of publicity law in California is designed to prevent unauthorized commercial exploitation of a person’s identity. Importantly for AI, courts have recognized that a recognizable voice can be protected even without sampling original recordings. Greene’s framing puts emphasis on audience perception and the commercial context of the product, not merely on the technical source of the audio.
The legal stakes for AI voices and right of publicity
Two widely cited cases loom over this fight. In Midler v. Ford Motor Co., the Ninth Circuit held that using a sound-alike to imitate singer Bette Midler’s distinctive voice in an advertisement could violate her rights under California law. In Waits v. Frito-Lay, Tom Waits prevailed on similar grounds. Those precedents turn on whether an ordinary listener would identify the performance as the plaintiff’s and whether the imitation was intentional and commercial.
If Greene can demonstrate that NotebookLM’s narrator leads listeners to believe he endorsed or participated in the product, the Midler/Waits line of cases could bolster his claims. Conversely, if Google proves the voice is a generic performance by an actor and not targeted at evoking Greene specifically, the company gains a strong defense.
The episode echoes a recent flashpoint in AI audio, when OpenAI paused a voice named Sky after public comparisons to actor Scarlett Johansson. Even when companies insist they hired licensed actors, perceived resemblance can trigger legal, reputational, and product risks if consumers conflate a synthesized voice with a celebrity’s identity.
Why NotebookLM is under the microscope for its audio
Audio Overviews are designed to sound like a polished, conversational podcast host guiding users through their documents. That format naturally borrows from well-known broadcast styles. When the delivery mirrors a public-radio cadence, it can feel familiar enough to raise questions about where homage ends and imitation begins—especially in a commercial AI context.
Regulators are also zeroing in on synthetic voice risks, from consumer confusion to impersonation scams. While this case centers on publicity rights rather than fraud, it unfolds amid intensifying scrutiny of voice cloning and growing pressure on tech firms to document consent, label synthetic media, and prove provenance for any voices used in products.
What to watch next as the AI voice lawsuit proceeds
The litigation will likely turn on evidence of intent and recognition: internal product documents, casting records for the actor, and expert analyses of vocal features and listener surveys. Courts often weigh whether the average consumer would attribute the voice to the plaintiff and whether the company sought to capitalize on that association.
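To make “expert analyses of vocal features” concrete, here is a minimal, purely illustrative sketch of the kind of acoustic comparison an expert might run in far more rigorous form: extracting MFCC features from two recordings and scoring their similarity. The file paths and the librosa-based approach are assumptions for illustration, not anything from the case record, and real forensic voice comparison goes well beyond a single similarity score.

```python
# Illustrative only: a toy acoustic-similarity check. File paths are hypothetical.
import numpy as np
import librosa

def mean_mfcc(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Load an audio file and return its time-averaged MFCC vector."""
    y, sr = librosa.load(path, sr=16000)                      # resample to a common rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)    # shape: (n_mfcc, frames)
    return mfcc.mean(axis=1)                                  # average over time

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical inputs: a clip of the plaintiff's broadcast voice and a clip of
# the synthetic narrator. A high score alone proves nothing legally; it would
# be one data point alongside listener surveys and evidence of intent.
score = cosine_similarity(mean_mfcc("plaintiff_clip.wav"),
                          mean_mfcc("narrator_clip.wav"))
print(f"MFCC cosine similarity: {score:.3f}")
```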
Beyond the courtroom, the case highlights a practical roadmap for AI audio teams: hire identifiable voice actors under clear contracts, maintain audit trails for every clip, avoid marketing that hints at celebrity likeness, and provide transparent labeling. Whether Greene prevails or not, the outcome will influence how platforms design, disclose, and defend synthetic voices that sound uncannily familiar.
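As a sketch of what “maintain audit trails for every clip” could look like in practice, here is one hypothetical way a team might record chain-of-title metadata for each voice recording. The field names and structure are illustrative assumptions, not a description of Google’s actual systems or any vendor’s schema.

```python
# Hypothetical per-clip provenance record an AI audio team might keep to
# document consent and chain of title. All field names are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class VoiceClipProvenance:
    clip_sha256: str       # content hash ties the record to the exact audio
    actor_id: str          # internal ID of the contracted voice actor
    contract_ref: str      # pointer to the signed release/consent document
    recorded_at: str       # ISO 8601 session timestamp
    consent_scope: str     # e.g. "synthetic narration, all products"
    synthetic_label: bool  # whether downstream output is labeled as AI

def make_record(audio_bytes: bytes, actor_id: str, contract_ref: str,
                consent_scope: str) -> VoiceClipProvenance:
    """Build an auditable provenance record for one recorded clip."""
    return VoiceClipProvenance(
        clip_sha256=hashlib.sha256(audio_bytes).hexdigest(),
        actor_id=actor_id,
        contract_ref=contract_ref,
        recorded_at=datetime.now(timezone.utc).isoformat(),
        consent_scope=consent_scope,
        synthetic_label=True,
    )

# Example: serialize a record as JSON so it could be produced in discovery.
record = make_record(b"...raw wav bytes...", "actor-0042",
                     "contracts/2024/VO-0042.pdf",
                     "synthetic narration, all products")
print(json.dumps(record.__dict__, indent=2))
```

The design intuition is simply that hashing the audio binds each consent record to one specific clip, which is the kind of provenance documentation the article suggests companies may increasingly need to show.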