
Google denies using David Greene’s voice in NotebookLM

By Gregory Zuckerman
Last updated: February 16, 2026 7:01 pm
Technology

Google has pushed back against a lawsuit from radio personality David Greene alleging the company copied his voice for NotebookLM’s AI-generated “Audio Overviews,” insisting the narrator is a hired professional actor and not modeled on Greene.

Greene, a former co-host of NPR’s Morning Edition and current host of the Left, Right & Center podcast from KCRW, claims the tool mimics his delivery and cadence so closely that listeners could believe he participated. The complaint, reported by major outlets including The Washington Post, accuses Google of exploiting his distinctive voice without consent.

Table of Contents
  • Google says the narrator is a hired actor, not David Greene
  • Inside David Greene’s lawsuit over NotebookLM’s AI voice
  • The legal stakes for AI voices and right of publicity
  • Why NotebookLM is under the microscope for its audio
  • What to watch next as the AI voice lawsuit proceeds
[Image: NotebookLM logo]

At the heart of the dispute is NotebookLM, Google’s AI research assistant that can produce podcast-style summaries of user-provided materials. Its default male narrator drew attention for a public-radio-like tone, which Greene argues trades on the commercial value of his persona.

Google says the narrator is a hired actor, not David Greene

Google flatly rejects the allegation. Company spokesperson José Castañeda told multiple publications that the claim is baseless and that the Audio Overviews voice was recorded by a paid voice actor. Google also maintains that NotebookLM was not trained on Greene’s voice and bears no intentional reference to him.

The company’s defense hinges on provenance: if it can demonstrate a clean chain of title for the actor’s performance and show that product teams did not target Greene’s sound, it undercuts the central premise that the voice is a deliberate imitation intended to trade on his identity.

Inside David Greene’s lawsuit over NotebookLM’s AI voice

Greene’s complaint invokes California’s statutory and common law right of publicity, along with unfair competition claims, arguing that his voice functions as a protected element of his likeness. The filing asserts that Google sought to replicate not just timbre but also his pacing, intonation, and on-air persona to lend credibility to an AI product.

Right of publicity law in California is designed to prevent unauthorized commercial exploitation of a person’s identity. Importantly for AI, courts have recognized that a recognizable voice can be protected even without sampling original recordings. Greene’s framing puts emphasis on audience perception and the commercial context of the product, not merely on the technical source of the audio.

The legal stakes for AI voices and right of publicity

Two widely cited cases loom over this fight. In Midler v. Ford Motor Co., a court found that using a sound-alike to mimic singer Bette Midler’s distinctive voice in an advertisement could violate her rights. In Waits v. Frito-Lay, Tom Waits prevailed on similar grounds. Those precedents turn on whether an ordinary listener would identify the performance as the plaintiff’s and whether the imitation was intentional and commercial.

[Image: Google NotebookLM logo]

If Greene can demonstrate that NotebookLM’s narrator leads listeners to believe he endorsed or participated in the product, the Midler/Waits line of cases could bolster his claims. Conversely, if Google proves the voice is a generic performance by an actor and not targeted at evoking Greene specifically, the company gains a strong defense.

The episode echoes a recent flashpoint in AI audio, when OpenAI paused a voice named Sky after public comparisons to actor Scarlett Johansson. Even when companies insist they hired licensed actors, perceived resemblance can trigger legal, reputational, and product risks if consumers conflate a synthesized voice with a celebrity’s identity.

Why NotebookLM is under the microscope for its audio

Audio Overviews are designed to sound like a polished, conversational podcast host guiding users through their documents. That format naturally borrows from well-known broadcast styles. When the delivery mirrors a public-radio cadence, it can feel familiar enough to raise questions about where homage ends and imitation begins—especially in a commercial AI context.

Regulators are also zeroing in on synthetic voice risks, from consumer confusion to impersonation scams. While this case centers on publicity rights rather than fraud, it unfolds amid intensifying scrutiny of voice cloning and growing pressure on tech firms to document consent, label synthetic media, and prove provenance for any voices used in products.

What to watch next as the AI voice lawsuit proceeds

The litigation will likely turn on evidence of intent and recognition: internal product documents, casting records for the actor, and expert analyses of vocal features and listener surveys. Courts often weigh whether the average consumer would attribute the voice to the plaintiff and whether the company sought to capitalize on that association.
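
To make the “vocal features” piece concrete, the following is a minimal, purely illustrative Python sketch of one crude starting point an analyst might use: averaged MFCC profiles compared by cosine similarity. The file names are hypothetical, and real forensic voice comparison relies on far more rigorous methods, such as speaker-embedding models, matched recording conditions, and controlled listener studies.

# Illustrative only: a rough numeric "voice profile" comparison between two clips.
import librosa
import numpy as np

def mfcc_profile(path: str, n_mfcc: int = 20) -> np.ndarray:
    """Load a clip and return its mean MFCC vector as a coarse voice profile."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity in [-1, 1]; values nearer 1 indicate more similar profiles."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical file names; the recordings at issue in the case are not public data.
score = cosine_similarity(mfcc_profile("narrator_clip.wav"),
                          mfcc_profile("broadcast_clip.wav"))
print(f"MFCC cosine similarity: {score:.3f}")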

Beyond the courtroom, the case highlights a practical roadmap for AI audio teams: hire identifiable voice actors under clear contracts, maintain audit trails for every clip, avoid marketing that hints at celebrity likeness, and provide transparent labeling. Whether Greene prevails or not, the outcome will influence how platforms design, disclose, and defend synthetic voices that sound uncannily familiar.
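
As a sketch of what an audit trail like that could look like, here is a small Python example that records a provenance entry per released voice clip. The field names and values are assumptions for illustration, not any specific company’s schema or practice.

# A minimal provenance record for a released voice clip: performer, contract,
# synthetic flag, disclosure label, and a hash tying the entry to the audio.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class VoiceClipProvenance:
    clip_sha256: str       # hash of the released audio file
    performer_id: str      # internal ID of the contracted voice actor
    contract_ref: str      # pointer to the signed consent/licensing agreement
    synthetic: bool        # whether the clip is model-generated
    disclosure_label: str  # user-facing label shown alongside the audio
    recorded_at: str       # ISO 8601 timestamp of capture or generation

def record_provenance(audio_bytes: bytes, performer_id: str,
                      contract_ref: str, synthetic: bool) -> str:
    """Build one audit-log line (JSON) for a voice clip."""
    entry = VoiceClipProvenance(
        clip_sha256=hashlib.sha256(audio_bytes).hexdigest(),
        performer_id=performer_id,
        contract_ref=contract_ref,
        synthetic=synthetic,
        disclosure_label="AI-generated narration" if synthetic else "Human narration",
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(entry))

# Hypothetical usage with placeholder values.
print(record_provenance(b"...audio bytes...", "actor-0042", "contract-2025-117", True))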

By Gregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.