BotBeat

Grammarly
POLICY & REGULATION · 2026-03-13

Grammarly Disables AI Author-Impersonation Tool Following Legal Action and Public Backlash

Key Takeaways

  • Superhuman/Grammarly's Expert Review feature used the names and personas of hundreds of writers without consent to drive subscriptions, leading to a class-action lawsuit alleging damages exceeding $5 million
  • The lawsuit, led by investigative journalist Julia Angwin, has already generated significant momentum, with over 40 additional complainants coming forward within 24 hours of filing
  • The company initially defended the feature with an opt-out mechanism rather than seeking explicit consent, which legal experts argue inappropriately shifted the burden onto the impersonated individuals
Source: Hacker News (https://www.bbc.com/news/articles/cx28v08jpe7o)

Summary

Grammarly has disabled its Expert Review feature, an AI tool that mimicked the writing styles of prominent authors and academics including Stephen King and Carl Sagan, after facing significant backlash and a multi-million-dollar class-action lawsuit. The feature, which provided writing feedback "inspired by" famous writers' personas without their consent, was taken down this week following criticism from impersonated individuals who argued their identities were being misappropriated for commercial gain.

Investigative journalist Julia Angwin, lead plaintiff in the lawsuit filed in the Southern District of New York, expressed shock at finding her professional identity marketed as a commercial product, describing the AI imitations as poor-quality "sloppergangers" that provided inferior editing advice. CEO Shishir Mehrotra apologized on LinkedIn, acknowledging the tool had "misrepresented" expert voices, though the company initially resisted full removal by offering an opt-out mechanism rather than seeking consent upfront.

  • The impersonated writers criticized not only the ethical violation but also the poor quality of the AI-generated advice, which often made writing worse while being falsely attributed to experts

Editorial Opinion

This incident highlights a critical blind spot in how generative AI companies deploy identity-based features without establishing clear consent frameworks. While Grammarly's apology and removal of the feature demonstrate some corporate accountability, the fact that the company launched this tool without proactively securing permission from the people being impersonated suggests a troubling approach to AI ethics and intellectual property. The lawsuit's emphasis on the poor quality of the imitations adds another dimension to the concern: not only were identities used without consent, but the tool's mediocre performance risked damaging the reputations of the impersonated experts. This case will likely set an important precedent for how AI companies must handle identity, consent, and the monetization of human likeness in the generative AI era.

Generative AI · Regulation & Policy · Ethics & Bias · Privacy & Data

More from Grammarly

  • Superhuman CEO Addresses Grammarly's AI Impersonation Controversy in Tense Interview (POLICY & REGULATION, 2026-03-23)
  • Grammarly's AI 'Expert Editors' Tool Faces Backlash for Unauthorized Voice Cloning of Journalists and Authors (INDUSTRY REPORT, 2026-03-21)
  • Grammarly Faces Class Action Lawsuit Over Unauthorized Use of Names in AI 'Expert Review' Feature (POLICY & REGULATION, 2026-03-17)
