Journalist Files Class-Action Lawsuit Against Grammarly Over Unauthorized Use of Identity in AI Feature
Key Takeaways
- Grammarly used real identities—including journalists from The Verge and other publications—in its AI feature without consent
- The class-action lawsuit argues the company violated laws protecting individuals' rights to control commercial use of their identity and likeness
- Grammarly disabled the "Expert Review" feature after the controversy emerged, and the CEO apologized for the privacy violation
Summary
Journalist Julia Angwin has filed a class-action lawsuit against Grammarly after discovering that her identity was used without consent in the company's "Expert Review" AI editing feature. The lawsuit alleges that Grammarly violated privacy and publicity rights by using the identities of real people, including journalists and academics, to generate AI-powered writing suggestions without obtaining permission. Grammarly initially launched an opt-out mechanism this week before disabling the feature entirely in response to the backlash. CEO Shishir Mehrotra acknowledged the company "fell short" and apologized, stating that Grammarly will "rethink our approach going forward."
Editorial Opinion
This case highlights a critical gap in how AI companies handle the use of real people's identities and reputations in generative AI systems. While Grammarly's intent to surface expert perspectives may have been well-meaning, deploying the feature without explicit consent reflects a troubling pattern in the AI industry of prioritizing innovation over individual rights. The swift legal response and feature shutdown suggest that companies can no longer assume they may use people's names and associated expertise without permission, even when the use is framed as beneficial.