A prominent investigative journalist has filed a class action accusing Grammarly of turning living writers and public figures into AI “editors” without permission, escalating a fast-moving fight over consent and commercial use of identity in generative tools. The complaint targets Grammarly’s short-lived Expert Review feature, which let paying users summon editorial feedback in the style and name of real people, from bestselling authors to tech critics—none of whom, the suit alleges, agreed to be featured.
What the lawsuit alleges about Grammarly’s AI editors
Journalist Julia Angwin filed the proposed class action against Superhuman, the parent company of Grammarly, claiming violations of privacy and publicity rights on behalf of herself and other individuals she says were impersonated. The filing asserts that Grammarly marketed well-known names as on-demand reviewers to bolster a $144-per-year subscription tier, implying endorsement and licensing arrangements that did not exist. The plaintiffs seek to stop the practice, recover damages, and force clear disclosures around the use of names and likenesses in AI outputs.
The alleged roster extended beyond authors to scientists and media personalities. That breadth, the suit argues, heightened confusion for users and potential reputational harm for those drafted into service as AI personas. Angwin, known for her privacy reporting, contends the product exploited precisely the kinds of identity and trust signals she has spent years warning the public about.
How Expert Review worked, and why the launch backfired
Expert Review invited subscribers to pick a named “expert” to critique their writing, then generated feedback in that person’s voice. Promoted examples included household names such as Stephen King and Carl Sagan, as well as journalists and AI ethicists like Kara Swisher and Timnit Gebru. Early testers reported that the guidance often read like generic coaching rather than the distinctive insight users might expect from a marquee byline.
The mismatch between famous identities and boilerplate comments fueled criticism that the feature traded on reputation while delivering little substance. Several of the featured individuals publicly objected to their inclusion, and within days Superhuman CEO Shishir Mehrotra said the feature had been disabled. He apologized for the rollout while continuing to pitch the concept as a bridge between experts and users, a stance likely to figure prominently in the company's legal defense.
The legal stakes around identity, endorsement, and AI
At its core, the case tests whether using a person’s name and implied voice in a commercial AI product violates the right of publicity and false endorsement laws. At least 35 U.S. states recognize the right of publicity by statute or common law, including California and New York. California’s statute allows for statutory damages and injunctive relief for unauthorized commercial use of a name or likeness; New York’s law similarly guards against deceptive use of a person’s identity.
Existing case law cuts close to AI impersonation. In Midler v. Ford, a federal appeals court held that imitating singer Bette Midler's voice in an advertisement without consent was actionable. In White v. Samsung, a court found that evoking Vanna White's persona via a robot in a commercial could violate her publicity rights. Plaintiffs may also claim false endorsement under Section 43(a) of the Lanham Act, arguing that the feature misled consumers into believing the "experts" endorsed or participated in the product.
Regulators, too, have signaled concern. The Federal Trade Commission’s updated Endorsement Guides emphasize that marketers cannot fabricate or misrepresent endorsements, a principle that extends to avatars and virtual influencers. Legal scholars such as Jennifer Rothman and Eric Goldman have long noted that the right of publicity covers more than celebrity photos—it can include names, voices, and other signifiers of identity when used to sell a product.
A broader consent reckoning for generative AI products
While recent AI lawsuits have centered on training data and copyright—think newsroom challenges to AI models or artists’ claims against image generators—this dispute zeroes in on something more visceral: real-time impersonation of identifiable people in a paid product. It sits alongside high-profile flare-ups over AI voice clones and synthetic endorsements, where the harm is not just copying style but appropriating hard-earned reputation.
The scale matters. Grammarly has publicly touted a massive user base over the years, embedding itself in classrooms and workplaces. Enterprise buyers scrutinizing generative features will now weigh not only data security and hallucinations but also endorsement risk: Was this "expert" approved? Is the identity licensed, clearly labeled as fictional, or opt-in? For many organizations, anything that looks like unauthorized endorsement is an immediate red flag.
What to watch next as the lawsuit moves forward
With Expert Review paused, the immediate question is whether Superhuman and Grammarly pivot to an explicit opt-in model with paid licensing, or retire the concept entirely. Courts could force changes through an injunction, or the parties could reach a settlement that establishes consent and labeling standards for persona-based AI tools.
Either way, the case is poised to shape how AI companies handle names, voices, and styles moving forward. Clearer rules could emerge: no real identities without documented consent; no suggestive branding that implies endorsement; and stronger disclosures when outputs emulate living people. For creators wary of being turned into automated assistants, the lawsuit may become the test that draws a bright line between inspiration and impersonation.