Grammarly’s new Expert Review feature promises feedback “from the perspective” of luminaries and working journalists. The catch is that those named voices aren’t actually involved, raising fresh questions about consent, transparency, and whether AI products are drifting into synthetic endorsements dressed up as authority.
How Grammarly’s Expert Review Feature Actually Works
Introduced as part of Grammarly’s AI expansion, Expert Review appears in the assistant’s sidebar and frames revision suggestions as if inspired by renowned authors and public intellectuals, living or dead. Reporters who tested the tool also saw feedback framed through the names of active tech journalists from major outlets.
Grammarly’s guidance states that references to experts are informational and do not indicate affiliation or endorsement. A company representative has said the names appear because these figures’ works are publicly available and widely cited. In practice, however, the interface can look a lot like borrowed credibility—users see advice paired with a famous byline, while the actual humans had no say.
That distinction matters. As one historian told a national magazine that examined the feature, calling this “expert review” is misleading if no experts participate. It’s an accuracy problem wrapped in a marketing problem.
Consent concerns and potential legal exposure for AI tools
The risk isn’t hypothetical. U.S. right of publicity laws protect individuals against unauthorized commercial use of their name or persona. Courts have held that even evoking someone’s identity without explicit naming can cross a line, as in landmark cases involving Bette Midler and Vanna White. A product experience that repeatedly associates advice with a recognizable person—particularly one still working—edges toward endorsement territory.
Consumer protection rules add more friction. The Federal Trade Commission’s updated Endorsement Guides caution that disclaimers don’t cure deception if the overall presentation implies someone’s approval or involvement. If users reasonably infer that a journalist or author shaped the suggestion they’re reading, a fine-print notice may not be enough.
Globally, regulation is moving in the same direction. The European Union’s AI Act requires transparency for AI-generated content and is sharpening expectations around the use of individuals’ likenesses and reputations. Platforms that lean on real people’s names to confer trust without permission could find themselves out of step with emerging norms and, eventually, enforcement.
Synthetic authority effects on credibility and user trust
Authority bias is powerful: behavioral research shows people routinely overweight guidance from named experts compared to anonymous advice. That bias collides with well-known AI pitfalls—hallucinations, overconfident phrasing, and style mimicry—creating a recipe for misplaced trust. When an AI suggests “what Orwell would say” or “how a reporter would revise this,” users may assume rigor and ethics that simply are not there.
Scale magnifies the stakes. Grammarly has claimed more than 30 million daily users and tens of thousands of business teams. Even a small share misunderstanding Expert Review as real, name-backed critique could propagate misinformation or sloppy sourcing across classrooms, newsrooms, and corporate communications.
There’s also collateral damage. Journalists and authors cultivate reputations over years. Having their names attached to AI-synthesized guidance they never endorsed, especially if the output is wrong or biased, risks reputational harm and audience confusion. Media organizations already battling AI-driven attribution issues don’t need another vector for phantom bylines.
What real expert review should mean for AI writing tools
If the goal is expert-caliber feedback, there are cleaner paths. Platforms can commission verified contributors, compensate them, and clearly label when advice is authored by a named expert versus generated by a model. Where names are invoked, licensing or explicit participation should be the floor, not an afterthought.
Short of formal partnerships, tools can ground suggestions in citations to specific works, not personas: “This guidance is an AI summary derived from chapters X and Y of [book],” paired with prominent disclosures. Some education platforms already route AI drafts through teacher-in-the-loop workflows; safety researchers do similar “red teaming” with domain specialists. The lesson is consistent: humans earn trust, and labels must match reality.
The bottom line on consent, transparency, and authority
Expert Review, as currently framed, reads like an attempt to launder authority through famous names without their consent. Disclaimers help but don’t fix the core mismatch between what the interface implies and what’s happening under the hood. If Grammarly wants the halo of expertise, it should bring real experts into the loop—or drop the name-dropping. In the AI era, clarity about who is speaking isn’t a feature; it’s the product.