Grammarly Disables AI Author-Impersonation Tool Following Legal Action and Public Backlash
Key Takeaways
- Superhuman/Grammarly's Expert Review feature used the names and personas of hundreds of writers without consent to drive subscriptions, leading to a class-action lawsuit alleging damages exceeding $5 million
- The lawsuit, led by investigative journalist Julia Angwin, has already generated significant momentum, with over 40 additional complainants coming forward within 24 hours of filing
- The company initially defended the feature with an opt-out mechanism rather than seeking explicit consent, which legal experts argue inappropriately shifted the burden onto the impersonated individuals
Summary
Grammarly has disabled its Expert Review feature, an AI tool that mimicked the writing styles of prominent authors and academics, including Stephen King and Carl Sagan, after significant backlash and a multimillion-dollar class-action lawsuit. The feature, which offered writing feedback "inspired by" famous writers' personas without their consent, was taken down this week following criticism from impersonated individuals who argued their identities were being misappropriated for commercial gain. Investigative journalist Julia Angwin, the lead plaintiff in the lawsuit filed in the Southern District of New York, expressed shock at finding her professional identity marketed as a commercial product, describing the AI imitations as poor-quality "sloppergangers" that offered inferior editing advice. CEO Shishir Mehrotra apologized on LinkedIn, acknowledging the tool had "misrepresented" expert voices, though the company initially resisted full removal, offering an opt-out mechanism rather than seeking consent upfront.
- The impersonated writers criticized not only the ethical violation but also the poor quality of the AI-generated advice, which often made writing worse while being falsely attributed to experts
Editorial Opinion
This incident highlights a critical blind spot in how generative AI companies deploy identity-based features without establishing clear consent frameworks. While Grammarly's apology and removal of the feature demonstrate some corporate accountability, the fact that the company launched the tool without proactively securing permission from the people being impersonated suggests a troubling approach to AI ethics and intellectual property. The lawsuit's emphasis on the poor quality of the imitations adds another dimension to the concern: not only were identities used without consent, but the tool's mediocre performance risked damaging the reputations of the impersonated experts. This case will likely set an important precedent for how AI companies must handle identity, consent, and the monetization of human likeness in the generative AI era.


