Grammarly's AI 'Expert Editors' Tool Faces Backlash for Unauthorized Voice Cloning of Journalists and Authors
Key Takeaways
- Grammarly created AI-generated 'expert editor' personas based on real journalists, authors, and public figures without obtaining consent or informing them
- The tool relied on mass scraping of published works to train its models, raising copyright and intellectual property concerns
- The generated editing suggestions are of poor quality and show little understanding of actual editorial practice, suggesting the AI training was insufficient
Summary
Grammarly, which rebranded as an AI-focused company last year, launched a controversial tool approximately seven months ago that offers writing reviews purportedly inspired by famous authors and journalists, including living writers, without their consent or knowledge. The tool claims to draw from the work of figures ranging from Stephen King and Neil deGrasse Tyson to contemporary journalists, apparently using mass-scraped content fed into large language models. The feature has drawn significant criticism on multiple ethical grounds: the company did not notify the individuals whose names and voices were being used, the tool's editing suggestions bear no coherent connection to actual editorial practice, and the approach raises serious questions about unauthorized use of people's intellectual property and public personas. The story gained wider attention in March 2026 when a PC Gamer writer discovered their own name being used for an 'expert editor' profile without their knowledge or consent.
- This incident exemplifies broader concerns about AI companies appropriating individuals' identities and work for commercial products without permission or compensation
Editorial Opinion
Grammarly's unauthorized cloning of real people's identities for a commercial AI product represents a particularly egregious form of AI misuse, one that goes beyond typical data-scraping concerns. That the tool produces mediocre suggestions while profiting from appropriated personas adds insult to injury; if these 'expert editors' were genuinely useful, the ethical violation might at least serve some purpose. This incident should serve as a watershed moment for establishing clear legal and ethical frameworks around AI voice synthesis and identity rights before the technology becomes even more normalized.