David Greene, the former host of NPR’s Morning Edition and current moderator of KCRW’s Left, Right & Center, has filed a lawsuit against Google alleging that a male podcast voice in the company’s NotebookLM product impermissibly imitates his distinctive vocal style. The dispute thrusts a high-profile public radio voice into one of the thorniest questions in AI: when does mimicking a person’s sound cross from inspiration into unlawful appropriation?
What David Greene Says Is at Stake in the Lawsuit
According to reporting by The Washington Post, Greene became concerned after friends and colleagues flagged the uncanny resemblance between his delivery and NotebookLM’s male “Audio Overviews” host. He contends the cadence, pacing, and even hesitations match his on-air persona—an identity he has cultivated over years of national broadcasting. For a journalist whose livelihood hinges on a recognizable voice, the allegation is more than aesthetic; it cuts to professional identity, reputation, and the commercial value attached to his sound.

Greene’s complaint lands as AI-generated narration becomes a routine feature across media platforms. A distinctive vocal style, as opposed to any literal recording, has long been a broadcaster’s signature, and his suit tests how far legal protections for that signature extend when machine learning recreates the “feel” rather than the file.
Google’s Response and How NotebookLM Works
Google has rejected the accusation, telling the Post that the NotebookLM voice at issue is based on a paid professional actor and not derived from Greene. NotebookLM packages users’ notes, transcripts, and documents into digestible formats and, in a popular feature called Audio Overviews, synthesizes scripted conversations between AI hosts to summarize source material. The tool is designed to sound natural and conversational—more like a smart audio briefing than a robotic readout.
Why would a system trained to sound natural end up sounding like Greene? Industry veterans note that broadcast-style delivery gravitates toward similar rhythms: clear diction, measured pauses, and gentle emphasis. If a hired actor is directed to deliver a “public radio” tone, a resemblance to widely known voices can emerge even without explicit cloning. Drawing the line between inspiration and imitation is the legal and technical challenge ahead.
The Legal Terrain for AI Voice Imitation
U.S. courts have recognized that a person’s distinctive voice can be protected even when no original recording is used. Two oft-cited cases—Midler v. Ford Motor Co. and Waits v. Frito-Lay—held that hiring soundalike performers to evoke a celebrity’s voice in ads could violate rights of publicity and mislead consumers. Those rulings turned on the commercial exploitation of a recognizable vocal signature.
How those precedents apply to generative AI will be central here. State right-of-publicity laws differ, but California’s is particularly robust, and New York has added protections against unauthorized “digital replicas.” Plaintiffs in AI voice cases may also raise false endorsement claims under the Lanham Act if a synthetic voice suggests sponsorship or approval. To succeed, Greene would need to show not merely similarity but unlawful appropriation that harms his economic interests or confuses listeners about his involvement.

Industry Flashpoints and Prior Voice Disputes
High-profile skirmishes over AI voices are mounting. OpenAI withdrew a ChatGPT voice after actress Scarlett Johansson complained it sounded like her, underscoring the reputational risk even when companies say they did not target a specific individual. Earlier, TikTok settled a suit from voice actor Bev Standing after her voice allegedly appeared in text-to-speech features without consent. These incidents, along with provisions on AI consent and compensation negotiated in recent entertainment union agreements, reflect fast-shifting norms around digital replicas.
Technically, voice synthesis requires little source material. Research from Microsoft and others has shown that short samples can capture a speaker’s timbre closely enough to fool casual listeners. And because models learn prosody and phrase-level patterns, they do not need to copy words verbatim to evoke a personality, which is precisely the zone Greene’s suit aims to delineate.
What to Watch Next in the David Greene Google Case
The immediate questions are whether a court finds the NotebookLM voice sufficiently similar to Greene’s and, if so, whether that similarity is legally actionable. Remedies could range from injunctive relief that forces Google to swap or retrain the voice, to damages, to disclosures clarifying that the host is synthetic and not affiliated with Greene. Even absent a ruling, companies are likely to accelerate safeguards: contractual attestations from voice actors, provenance tags for synthetic audio, opt-out registries for public figures, and brand safety audits to avoid soundalike risks.
Regulators are circling as well. The Federal Trade Commission has warned that AI-driven impersonation and deceptive deepfakes may trigger enforcement, and European rules are moving toward mandatory labeling for synthetic media in certain contexts. If Greene prevails, or even extracts a settlement, expect a new informal standard in tech: “no plausible confusion” voice policies alongside the now-familiar bans on explicit cloning.
For millions who know Greene’s measured delivery from morning commutes, the case is more than a tech dust-up; it’s a referendum on whether a recognizable voice remains a person’s own in the age of generative audio. However it resolves, the suit will serve as a blueprint for how newsrooms, platforms, and creators navigate the sound of authenticity in AI-driven media.
