Grammarly has taken its AI-powered Expert Review tool offline after revelations that it generated feedback under the names of real writers and academics — including living authors and deceased cultural figures — without their consent. The move comes as the company faces a class action lawsuit alleging unauthorized commercial use of writers’ identities.
The controversy highlights a fast-emerging fault line in generative AI: systems that don’t just learn from public content but simulate a specific person’s voice, name, and authority. For a platform used by tens of millions, the stakes extend beyond reputational damage into legal risk.
How the Feature Worked — And Why It Sparked Outrage
Launched last year as part of a suite of AI “agents,” Expert Review promised substantive feedback grounded in the work of named subject-matter experts. Marketing copy, later archived by the Internet Archive’s Wayback Machine, said the agent drew on “insights from subject-matter experts and trusted publications,” and let users select specific authors to shape the advice they received.
In practice, as first reported by tech journalists testing the product, the system presented AI-generated comments attributed to real people — from bestselling authors such as Stephen King to scholars like bell hooks — blending a general disclaimer that no endorsement was implied with feature descriptions that suggested expert-derived guidance. That tension proved combustible. Writers and academics publicly objected to having their names presented as the voice of machine-written feedback they never reviewed or authorized, and that, in the case of deceased figures, they could not possibly have seen.
Critics called the setup “exploitative” and “misleading,” arguing it invited users to trust advice precisely because it appeared tied to familiar names. The initial plan to let individuals email the company to opt out only intensified the backlash, since many affected people would not know their names were being used unless a user happened to notice and tell them. The approach also offered no remedy for deceased figures such as bell hooks or astronomer Carl Sagan, whose legacies were invoked without any possibility of consent.
Company Response and a Swift Retreat from Backlash
Following days of criticism from authors, editors, and academics, a company executive acknowledged the concerns and apologized in a public post, saying the agent had “misrepresented” experts’ voices. Grammarly said it would “reimagine” the feature with a model that gives experts meaningful control over whether and how they are represented, and disabled Expert Review while it rethinks the design.
That promise suggests any future iteration may move from an opt-out to an opt-in or licensing-based framework, where named contributors can set terms or decline participation altogether. It’s a familiar pivot for AI companies facing identity and attribution concerns: emphasize transparency, secure explicit permissions, and build compensation or control mechanisms for human contributors.
Class Action Targets Unauthorized Use of Names
The legal challenge arrived quickly. Journalist Julia Angwin filed a class action in federal court in New York, alleging that Grammarly’s feature used her identity without consent for commercial purposes. Her counsel at Peter Romer-Friedman Law PLLC framed the case squarely as a right-of-publicity claim, noting that New York law has long prohibited using a person’s name for advertising or trade without permission.
Legal scholars point out that New York strengthened its publicity protections in 2021 by adding postmortem rights for certain deceased individuals, a move that could be relevant when products invoke the identities of late authors. While the precise applicability will turn on the facts, the lawsuit seeks damages and an injunction to block any future use of writers’ names without consent, a remedy that would force product redesign even if monetary exposure proves limited.
A Broader Reckoning for AI Attribution and Identity
Expert Review’s implosion lands amid a broader industry clash over attribution, licensing, and impersonation. News organizations and authors’ groups have already sued AI developers over training data and alleged derivative uses, and courts are beginning to sort out where fair use ends and appropriation begins. Simulating the aura of a specific person — name, reputation, and implied endorsement — is an even riskier frontier than generic style mimicry.
For a platform that has reported more than 30 million daily users and widespread enterprise adoption, the episode underscores a simple product rule that AI does not obviate: if a feature’s trust signal is a human name, that human needs a say. Expect heightened scrutiny of any AI tool that assigns real-world bylines, likenesses, or expert personas to generated output, as regulators and courts draw firmer lines between inspiration, attribution, and impersonation.
What to Watch Next as Identity Claims Hit AI Tools
Key questions now include whether Grammarly commits to an explicit opt-in program for named experts, whether compensation or co-branding becomes part of any relaunch, and how the company proposes to handle estates of deceased writers. The trajectory of the Angwin lawsuit will also be closely watched, as a clear ruling on identity-based AI attributions could set a template for future claims across the industry.
The takeaway for AI builders is already clear: disclaimers are not a substitute for consent, and brand trust erodes quickly when automation wears a borrowed human face.