OpenAI Found to Have Violated Canadian Privacy Laws; Agrees to Enhanced Safeguards After Watchdog Investigation
Key Takeaways
- Canada's privacy watchdog found OpenAI violated PIPEDA (Personal Information Protection and Electronic Documents Act) through overly broad data collection for training ChatGPT
- OpenAI scraped publicly available internet data without consent or transparency to train GPT-3.5 and GPT-4, exposing Canadians to potential harm
- OpenAI has agreed to enhanced safeguards including data retention policies, reduced training data collection, and transparency measures in both English and French
Summary
Canada's privacy watchdog and provincial commissioners from Quebec, British Columbia, and Alberta concluded Wednesday that OpenAI violated the country's privacy laws by engaging in "overly broad" data collection to train its ChatGPT models. The investigation, launched in 2023, found that OpenAI scraped publicly available online information without transparency or consent from Canadians to train GPT-3.5 and GPT-4 models, exposing Canadians to potential risks of harm including data breaches and discrimination.
The report criticized OpenAI for rushing ChatGPT to market without proper privacy safeguards in place, only addressing concerns after launch. Privacy Commissioner Philippe Dufresne stated that OpenAI had "known privacy issues" that should have been resolved before deployment. The investigation found that Canadians had no way to access their personal information or request corrections or deletions.
In response, OpenAI has agreed to implement additional safeguards, including data retention policies, significantly limiting the information used to train new ChatGPT models, and committing to transparency measures in both official languages about its data practices. The Privacy Commissioner said ChatGPT is safe to use today with these commitments in place, though the report highlights the need to modernize Canada's privacy laws for the artificial intelligence age.
The investigation underscores broader concerns about AI companies' data practices and prompted calls for stronger regulatory guardrails. The privacy commissioners called for modernizing Canada's privacy laws to address AI-specific challenges, noting that the company exploited gaps in current regulations. While OpenAI disagreed with some findings, the company agreed to the recommended measures and to continued compliance monitoring.