Meta’s AI Glasses Relay Sensitive Footage to Kenyan Reviewers, Prompting Class‑Action
Photo by Jezael Melgoza (unsplash.com/@jezar) on Unsplash
Meta’s AI glasses sent users’ bathroom, nude and intimate footage to human reviewers in Kenya, The Verge reports, citing an investigation by two Swedish newspapers that identified Nairobi‑based contractors viewing the content.
Key Facts
- Key company: Meta
Meta’s Ray‑Ban Meta smart glasses rely on an on‑device AI pipeline that flags “sensitive” content before it is uploaded to Meta’s servers, but the investigation by Sweden’s Svenska Dagbladet and Göteborgs‑Posten reveals that flagged clips are routinely handed off to human annotators in Nairobi for manual review. The contractors, who work for a third‑party data‑labeling firm, reported seeing video of users entering bathrooms, changing clothes, and engaging in sexual activity, confirming that the “privacy‑by‑design” claim in Meta’s marketing does not extend to the final stage of the data‑processing chain (The Verge).
The class‑action complaint filed in federal court in San Francisco alleges that Meta’s advertising “affirmatively misled” consumers by asserting that the glasses were built to protect privacy while concealing the reality that a stranger halfway around the world may view their most intimate moments. The filing, prepared by Clarkson Law Firm, cites the Swedish newspapers’ reporting as the factual basis for its claims and names two purchasers from California and New Jersey who relied on Meta’s privacy assurances when buying the device (Engadget). The suit seeks both monetary damages and injunctive relief to halt the practice of sending user‑generated footage to overseas reviewers.
According to the Swedish outlets, the Nairobi‑based annotators are tasked with labeling objects, actions, and scenes in the video streams to improve Meta’s computer‑vision models. The work instructions explicitly require them to flag “intimate” material, which includes bathroom visits and sexual encounters, for further analysis. The contractors said they are not warned that the content originates from a consumer‑facing product marketed as “privacy‑focused,” creating a disconnect between Meta’s public messaging and the internal data‑collection workflow (The Verge).
Meta has not publicly responded to the lawsuit or the Swedish investigation, but the company’s prior handling of AI‑training data suggests a pattern of outsourcing large‑scale annotation to low‑cost labor markets. In 2023, Meta disclosed that it employs thousands of contractors in Southeast Asia and Africa to label images for its AI models, a practice that has drawn criticism for inadequate worker protections. The current allegations extend that model to a wearable device that captures continuous first‑person video, raising the stakes for privacy law compliance under California’s Consumer Privacy Act and similar statutes (Engadget).
If the plaintiffs succeed, Meta could face significant financial exposure and be forced to redesign its data pipeline so that any human review of captured footage occurs only after explicit user consent and with robust safeguards. The case also spotlights a broader industry tension: the trade‑off between rapid AI model improvement and the ethical treatment of both end users and the low‑wage annotators who power those models. As regulators worldwide tighten privacy rules, companies that market “privacy‑preserving” hardware may need to reconcile their promotional claims with the opaque realities of their AI training ecosystems.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.