Grammarly launched a feature in August 2025 that impersonates dead historians. David Abulafia died less than two months before the company started using his life’s work to train an AI that speaks in his voice. This isn’t a bug. It’s the business model. The company scraped publicly available writing from scholars, journalists, and authors, without asking, and turned it into “Expert Review,” a feature that frames AI suggestions as coming “from the perspective” of real people. Users think they’re getting advice from Pulitzer winners and tenured professors. They’re getting a language model trained on scraped text.
The gap between what users experience and what’s actually happening is the entire problem.
Grammarly’s ‘expert reviews’ don’t involve a single expert
Zero scholars, journalists, or writers consented to having their work scraped and turned into AI feedback. The feature ran for seven months before The Verge discovered it was impersonating its own staff: editor-in-chief Nilay Patel and senior editors David Pierce and Tom Warren, none of whom knew Grammarly was using their names. The list extends to several major publications, including The Verge, Wired, Bloomberg, and The New York Times, plus individual journalists whose bylines became training data.
The David Abulafia case is the most visceral. He was a Cambridge historian who spent decades writing about Mediterranean trade routes. He died. Then Grammarly turned his scholarship into an AI agent that critiques undergraduate essays. Historian Claire E. Aubin called it “among the most cursed” things in academia.
But the feature frames suggestions as coming “from the perspective” of these experts, creating false authority at scale, part of a broader pattern of how AI is changing the internet without users realizing what they’ve lost. People believe they’re getting human expertise. They’re not.
The AI gives advice ‘like’ experts, but nobody knows if it’s good advice
What does impersonation look like in practice? When a TechCrunch reporter tested the feature, the AI suggested adding “ethical context like Casey Newton,” leveraging anecdotes “for reader alignment” like Kara Swisher, and posing “the bigger accountability question” like Timnit Gebru. The suggestions sound plausible. There’s no evidence they reflect how those writers actually work.
Worse: if the AI gives bad advice, the reputational damage falls on the impersonated expert, not Grammarly. A user who follows “Casey Newton’s” suggestion and gets rejected from a publication will blame Newton, not the AI. Like other AI agents trained on scraped data, Grammarly’s system mimics patterns without understanding context or caring about consent.
The feature also raises unresolved copyright questions. Grammarly’s own disclaimer says references to experts are “for informational purposes only and do not indicate any affiliation”: a legal shield that doesn’t change the user experience. As AI-generated content becomes undetectable, the line between human expertise and machine mimicry disappears. Grammarly just erased it entirely.
The illusion is the product.
The backlash is loud, but the consequences are still theoretical
No cease-and-desist letters. No lawsuits. No formal demands for removal, at least not publicly. The outrage is real (academic Twitter exploded, journalists are furious), but Grammarly hasn’t pulled the feature or offered opt-outs. The company also hasn’t released adoption numbers, so we don’t know if users are canceling subscriptions or if this is just elite backlash.
The bigger problem: there’s no evidence the AI advice is actually wrong. It’s just attributed to people who didn’t write it. That’s a trust violation, not a performance failure. And trust violations in AI tools mirror the shadow AI adoption problem: once users learn they’ve been misled, they don’t report it; they just stop using the product.
Once users learn they’ve been fooled, they don’t come back.
Grammarly’s disclaimer: “for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals.” Claire E. Aubin’s reaction: “among the most cursed things in academia.” The reader decides which world they want to live in: one where AI can legally impersonate anyone with public work, or one where consent still matters.