Meta’s Smart Glasses Track Users, Sparking Widespread Privacy Concerns
Photo by Julio Lopez (unsplash.com/@juliolopez) on Unsplash
Meta’s Ray‑Ban smart glasses are sending users’ intimate video, including bathroom trips, dressing, and sex, to third‑party contractors for uncensored review, according to a joint investigation by Svenska Dagbladet and Göteborgs‑Posten, as reported by Gizmodo.
Key Facts
- Key company: Meta
Meta’s Ray‑Ban smart glasses have reportedly shipped more than seven million units, yet the privacy fallout is only now emerging. A joint investigation by Sweden’s Svenska Dagbladet and Göteborgs‑Posten, cited by Gizmodo, found that raw video captured by the glasses is routinely handed to a Kenya‑based contractor, Sama, for human annotation. The workers are tasked with labeling every identifiable element on screen to train Meta’s computer‑vision models, but the review pipeline appears to lack any pre‑screening filter. As a result, contractors have been exposed to footage of users entering bathrooms, undressing, handling credit‑card transactions and even recording sexual activity, all “uncensored,” according to the investigation.
The practice hinges on Meta’s own terms of service for its AI‑enabled products, which explicitly permit both automated and manual review of user content by “third‑party vendors” to “provide, maintain, and improve Meta services” and to ensure compliance with law, Futurism notes. The documents do not distinguish between benign interactions and intimate moments, effectively granting Meta blanket authority to forward any captured video to external annotators. Contractors interviewed by the Swedish papers said they are expected to label the material without question; refusal or hesitation could cost them their jobs, a pressure that underscores the asymmetry between the workers’ consent and the users’ expectations of privacy.
The exposure of private moments is not merely anecdotal. One Sama employee recounted seeing a wearer place the glasses on a bedside table while a spouse entered the room and began to undress, unaware of the ongoing recording. Another described footage of wearers watching pornography or even filming themselves during sex. In retail settings, annotators reported seeing credit‑card numbers and text messages displayed on phones, suggesting that the glasses capture a continuous “firehose” of data that is not automatically redacted. Because there is no robust culling mechanism, even accidental recordings, such as those made when a user forgets to disable the device, are treated the same as intentional content, amplifying the risk of inadvertent data leakage.
Meta’s response to the revelations has been muted. While the company has long marketed the glasses as a seamless bridge between the physical world and its AI ecosystem, it has not publicly addressed the specific allegations of third‑party human review of intimate footage. The investigation highlights a broader tension in the wearable‑tech market: as devices become more capable of continuous visual capture, the line between useful training data and invasive surveillance blurs. CNET’s coverage of upcoming smart‑glass offerings notes that competitors like Xreal and Viture are also pushing “wild options” for 2026, but none have yet disclosed comparable annotation pipelines, raising questions about industry‑wide standards for data handling.
Privacy advocates argue that Meta’s model violates the expectation of informed consent. The terms of service, while legally binding, are rarely read in full by consumers, and the clause allowing “manual review…through third‑party vendors” is buried among technical jargon. The Swedish investigation suggests that users are effectively forced to surrender a continuous video feed of their daily lives to an opaque supply chain, with little recourse once the data leaves Meta’s servers. As the debate over wearable surveillance intensifies, regulators may scrutinize whether Meta’s practices comply with GDPR’s strict requirements on data minimization and purpose limitation, especially given the cross‑border transfer of footage to Kenya.
The episode serves as a cautionary tale for the next wave of AR wearables. If Meta’s annotation pipeline remains unfiltered, the company risks eroding trust in a product category that relies on user comfort and consent. Industry observers, including those at The Verge, have already documented misuse of the glasses for real‑time doxxing on college campuses, underscoring the technology’s dual‑use potential. Until Meta implements rigorous privacy safeguards—such as on‑device redaction, opt‑out mechanisms for human review, and transparent reporting of third‑party data flows—its smart glasses are likely to remain a flashpoint for privacy debates rather than a mainstream success.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.