ICO urges Meta to address concerns in new AI smart‑glasses report.
The UK Information Commissioner's Office (ICO) has written to Meta, urging action after an investigation by Swedish newspapers, first reported by the BBC, found that outsourced workers could view intimate footage captured by the company's AI‑enabled Ray‑Ban glasses.
Key Facts
- Key company: Meta
Meta’s Ray‑Ban smart glasses have sparked a regulatory firestorm after a joint investigation by Sweden’s Svenska Dagbladet and Göteborgs‑Posten revealed that outsourced annotators in Kenya were routinely exposed to highly intimate footage captured by the devices. According to the newspapers, workers employed by Nairobi‑based data‑labeling firm Sama described reviewing videos that included “glasses‑wearers using the toilet or having sex,” with one employee stating, “We see everything – from living rooms to naked bodies.” The investigation, first reported by the BBC, found that the content was examined to “teach Meta’s AI to interpret images,” a process the company says is essential for improving the glasses’ hands‑free question‑answering feature [BBC].
Meta confirmed that human contractors sometimes review user‑generated media, citing its privacy policy, which allows both automated and manual review of interactions with its AI systems. The firm says recordings are filtered before human review, with techniques such as face‑blurring intended to protect privacy. However, sources quoted by SvD and GP assert that the filtering "sometimes failed," leaving identifiable faces visible to annotators. When asked for clarification, Meta provided the BBC with a link to its Supplemental Terms of Service but could not point to the specific sections that govern this human‑review process [BBC].
The ICO has formally written to Meta, describing the newspaper report as "concerning" and demanding details on how the company complies with UK data‑protection law. In a statement, the ICO emphasized that "devices processing personal data, including smart glasses, should put users in control and provide for appropriate transparency," and that service providers must clearly explain what data is collected and how it is used [BBC]. The regulator's inquiry follows Meta's own admission that contractors are used to "review this data to improve people's experience with the glasses," as outlined in the company's privacy documentation [BBC].
Meta’s response to the BBC highlighted that users must actively trigger recording—either manually or via voice command—and that the company continuously refines its privacy safeguards. Nonetheless, the investigation suggests a gap between policy and practice: workers reported seeing “glasses‑wearers watching pornography” and even a scenario where a man’s glasses recorded his wife undressing in a bedroom. The annotators described a heavily monitored work environment, with cameras everywhere and a ban on mobile phones, yet the nature of the content they reviewed remained deeply invasive [BBC].
Industry analysts have long warned that the convergence of wearable hardware and AI raises novel privacy challenges, but the ICO’s outreach marks the first formal regulatory pushback against Meta’s smart‑glasses ecosystem. If Meta fails to demonstrate robust, verifiable controls—such as reliable face‑blurring, clear user opt‑out mechanisms, and transparent disclosures—it could face enforcement action under the UK’s Data Protection Act. The episode also adds to broader scrutiny of Meta’s data‑handling practices, which have already attracted attention from regulators in the EU and the United States. As the company prepares to roll out its AI features across more markets, the outcome of the ICO’s investigation may set a precedent for how wearable AI devices are governed worldwide.
Sources
- BBC
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.