Grammarly Offers Opt-Out for AI Training Data Usage After Privacy Backlash
Key Takeaways
- Grammarly enabled AI training on user data without clear upfront disclosure
- The company chose to offer opt-out rather than halt the practice or implement opt-in consent
- Users must actively contact the company via email to exclude their data from AI training
Summary
Grammarly has faced significant backlash over its use of user data to train its AI editor without explicit consent. Rather than apologizing or discontinuing the practice, the company is offering users an opt-out mechanism that allows individuals to prevent their writing from being used for AI model training. Users can request exclusion by emailing [email protected]. The move is a defensive response to privacy concerns: the feature was reportedly already in operation before users were widely informed about it. The controversy highlights broader privacy tensions in AI-powered productivity tools.
Editorial Opinion
Grammarly's response to the privacy backlash demonstrates a troubling pattern in the AI industry: companies quietly integrate user data into training pipelines and only offer opt-out options after being caught. A true privacy-first approach would require explicit opt-in consent before using personal writing data for AI training. This reactive posture, while technically addressing user concerns, reflects a fundamental misalignment between how AI companies operate and what users reasonably expect regarding their personal data.