Superhuman CEO Addresses Grammarly's AI Impersonation Controversy in Tense Interview
Key Takeaways
- Superhuman's Expert Review feature used cloned identities of real journalists without permission, sparking a class action lawsuit and significant industry backlash
- CEO Shishir Mehrotra apologized for the feature but defended the company's approach to AI integration across its productivity suite
- The incident highlights ongoing tensions between AI companies' use of creator content and the lack of clear consent frameworks in the industry
Summary
Shishir Mehrotra, CEO of Superhuman (the parent company of Grammarly, Coda, and Mail), sat down for an in-depth interview with Nilay Patel of The Verge to address the Expert Review controversy that drew significant backlash from journalists and creators. In August 2025, Grammarly launched the Expert Review feature, which offered writing suggestions from AI-cloned versions of real journalists and writers—including Patel himself—without obtaining permission from those individuals. The revelation prompted outrage across the media industry, led to a class action lawsuit filed by investigative journalist Julia Angwin, and ultimately forced Superhuman to kill the feature entirely.
The interview, which took place months after the incident, tackled difficult questions about AI ethics, creator rights, and the tension between corporate innovation and user consent. Mehrotra apologized for the decision and acknowledged the breach of trust, while also defending the company's broader vision of AI-native productivity tools. The conversation revealed a fundamental disagreement over how extractive such AI practices feel to creators whose work and identities were used without permission; still, Mehrotra showed up for the interview despite the uncomfortable circumstances.
- Superhuman has since discontinued the Expert Review feature and implemented opt-out mechanisms, but questions remain about broader attribution vs. impersonation standards
- The controversy underscores growing concerns about how AI companies train and deploy models using real people's voices, writing styles, and identities without explicit consent
Editorial Opinion
The Grammarly impersonation incident represents a critical inflection point for AI companies: the difference between attribution and permission. While Superhuman ultimately corrected course, the initial launch revealed how easily AI companies can rationalize extractive practices when the technology enables them. The conversation demonstrates that technical capability alone doesn't answer the ethical question of whether something should be done—and that CEO accountability through direct dialogue, however uncomfortable, may be essential to establishing industry norms around creator consent.


