ChatGPT Users Face Privacy Risks: Deleted Conversations Retained Due to Legal Hold, Expert Warns
Key Takeaways
- Deleted ChatGPT conversations are not actually deleted, due to a legal hold from the ongoing New York Times lawsuit against OpenAI
- OpenAI uses conversations to train future models such as GPT-5, with human trainers reading chats to improve AI responses
- Even after opting out of training, OpenAI retains conversations for 30 days for safety review and compliance purposes
Summary
A new investigation reveals that ChatGPT users' conversations are far less private than commonly assumed, with sensitive data being tracked, stored, and used for AI model training despite users' deletion attempts. According to the report, OpenAI collects not only chat content but also technical data including IP addresses, device information, and account details. Most critically, a June 2025 court order in The New York Times v. OpenAI case requires OpenAI to retain all chat data—including conversations users believed they had deleted—until the lawsuit concludes, effectively making the "delete" button non-functional. The article provides step-by-step instructions for users to protect their privacy going forward, starting with disabling the "Chat History & Training" setting, though this cannot retroactively protect previously shared conversations.
Editorial Opinion
This revelation exposes a critical gap between user expectations and OpenAI's actual data practices. While the legal hold is a temporary measure, it underscores the need for AI companies to provide transparent, granular privacy controls and clearer communication about data retention. Users deserve to know that deleting a conversation doesn't mean it's gone, and the industry needs stronger default privacy protections rather than relying on users to navigate obscure settings.