Meta's AI Glasses Send Sensitive Footage to Human Reviewers in Kenya, Investigation Reveals
Key Takeaways
- Human contractors in Kenya reviewing Meta smart glasses footage have seen bathroom visits, nudity, and intimate moments despite Meta's privacy claims
- Automatic face-blurring systems intended to protect privacy do not always work as intended, leaving individuals identifiable
- At least one class action lawsuit has been filed accusing Meta of false advertising and privacy law violations
Summary
An investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten has revealed that Meta's AI-powered smart glasses are sending sensitive and intimate footage to human reviewers based in Nairobi, Kenya. According to the report, contractors working as AI annotators have viewed videos captured by the glasses showing bathroom visits, naked individuals, and other private moments. The contractors, who label data to help train AI systems, reported that while faces are supposed to be automatically blurred, the system "does not always work as intended," leaving some faces visible along with other sensitive information such as bank cards.
The investigation has already prompted at least one proposed class action lawsuit against Meta, accusing the company of violating false advertising and privacy laws. The lawsuit alleges that Meta failed to disclose that using the glasses' AI features would result in strangers viewing users' most private moments, despite the company's claims that the glasses were "designed for privacy." The lawsuit argues that Meta assumed a duty to disclose material facts that would inform consumers' purchasing decisions but instead concealed the reality of human review processes.
Meta's Ray-Ban and Oakley smart glasses feature a built-in AI assistant that can answer questions about what users see, and the product line has experienced significant growth in popularity despite mounting privacy and surveillance concerns. The glasses represent Meta's push into wearable AI technology, but this revelation raises serious questions about the company's data handling practices and the adequacy of its privacy safeguards. The incident highlights the often-hidden human labor behind AI systems and the potential risks when sensitive personal data is processed by third-party contractors in the AI training pipeline.