Meta employee alleges coworker emerged from bathroom naked, sparking workplace scandal
Photo by Hakim Menikh (unsplash.com/@grafiklink) on Unsplash
A Meta employee alleges a coworker emerged from a bathroom naked, igniting a scandal over privacy breaches at the firm, SvD reports.
Key Facts
- Key company: Meta
Meta’s AI‑glasses rollout has been built on a hidden layer of outsourced data work that raises serious privacy questions, according to an investigation by Svenska Dagbladet (SvD). The report describes how thousands of annotators in Nairobi, employed by the subcontractor Sama, sift through raw video streams from the “Meta Ray‑Ban Glasses” and label every visual element – from traffic signs to human bodies – before the footage is fed into Meta’s proprietary models. The workers, who are paid to draw bounding boxes around objects and transcribe spoken words, say they have seen “someone going to the toilet, or getting undressed” in the raw feeds, and they suspect the subjects are unaware they are being recorded (SvD). The allegation that a coworker emerged from a bathroom naked, shared by a Meta employee, is the latest flashpoint in a broader pattern of privacy‑sensitive data being harvested without explicit consent.
The SvD investigation links the Nairobi annotation pipeline directly to the product launch in Menlo Park, where Mark Zuckerberg unveiled the glasses in September 2025. During the presentation, Zuckerberg wore the prototype and the audience saw a live feed of his point‑of‑view, a demonstration meant to showcase real‑time translation, facial recognition and “all‑in‑one” assistance (SvD). Behind that polished demo, the raw video was already being processed by the Kenyan annotators, who are tasked with “describing, labeling and quality‑assuring” each frame before it reaches Meta’s training servers. The scale of the operation is hinted at by the “over 9,300 miles” separating the Silicon Valley launch from the Nairobi office, underscoring how the company relies on low‑cost labor to power its AI pipeline.
The privacy breach claim has sparked internal backlash at Meta. Employees who have spoken to SvD describe a “stifling” atmosphere, noting that the company’s internal policies promise user control over data, yet the annotation workflow effectively strips that control away. One Nairobi worker, speaking on condition of anonymity, warned that “if they knew, they wouldn’t be recording,” suggesting that subjects are not informed that their most intimate moments could be captured and processed (SvD). This mirrors earlier revelations from The Verge, where former Facebook moderators broke NDAs to expose systemic issues in content‑moderation practices, highlighting a pattern of opaque labor conditions and insufficient safeguards for both workers and end‑users.
Meta’s leadership has not publicly responded to the specific bathroom allegation, but the company’s broader stance on privacy has come under scrutiny in recent legal filings. Reuters reported that Zuckerberg blocked proposed curbs on “sex‑talking” chatbots for minors, a move that regulators say could exacerbate risks associated with unmoderated content (Reuters). While the chatbot case is distinct, it reinforces concerns that Meta’s internal risk assessments may prioritize product rollout over user safety. The convergence of these stories – the Nairobi annotation pipeline, the bathroom incident, and the chatbot controversy – paints a picture of a company struggling to reconcile its aggressive AI ambitions with the ethical obligations of handling sensitive personal data.
Analysts note that the scandal could have material repercussions. Meta’s AI‑glasses venture is positioned as a flagship product intended to compete with smartphones, and any erosion of consumer trust could hinder adoption. Moreover, the reliance on subcontracted annotators raises potential regulatory exposure, especially in jurisdictions tightening data‑privacy laws. If investigations confirm that Meta’s systems captured and stored non‑consensual footage, the firm could face class‑action lawsuits and enforcement actions from data‑protection authorities. The episode also adds pressure on Meta’s board to enforce stricter oversight of third‑party labor practices, a demand echoed by employee advocacy groups highlighted in the SvD report.
In the short term, Meta is likely to double down on internal reviews of its data‑collection pipelines while attempting to contain the public fallout. The company’s official communications emphasize that the glasses are “designed with privacy in mind,” yet the SvD findings suggest a disconnect between policy and practice. As the scandal unfolds, stakeholders – from investors to privacy regulators – will be watching how Meta reconciles its AI aspirations with the need for transparent, consent‑driven data handling.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.