Civil‑rights groups warn Meta that its facial‑recognition glasses could arm stalkers and abusers.
Photo by Steve Johnson on Unsplash
Meta faces a coalition of over 70 civil‑rights groups urging it to scrap facial‑recognition on its Ray‑Ban and Oakley smart glasses, warning the internal “Name Tag” feature could let stalkers, abusers and federal agents silently identify strangers, Wired reports.
Key Facts
- Key company: Meta
Meta’s internal “Name Tag” roadmap, first exposed by The New York Times in February, outlines two possible roll‑outs for its upcoming Ray‑Ban and Oakley smart glasses. In the narrower version, the AI‑powered assistant would surface data only on people the wearer already follows on a Meta platform; the broader version would pull up any public Instagram or Facebook profile that the camera sees, according to the memo obtained by Wired. Engineers are still debating which path to take, but the coalition of more than 70 civil‑rights groups says the very idea of a silent, on‑the‑fly facial‑recognition overlay is a privacy nightmare that can’t be fixed with opt‑outs or “incremental safeguards.” Their open letter to Mark Zuckerberg argues that by the time a user can discover they’ve been identified, the damage—whether a stalker’s obsession or a federal agent’s surveillance—may already be done.
The groups’ demand goes beyond a simple product tweak. In their letter, the ACLU, EPIC, Fight for the Future, Access Now, and dozens of other advocates call on Meta to “kill the feature before launch” and to disclose any known instances where its wearables have been used for stalking, harassment, or domestic‑violence cases. They also want Meta to reveal any past or ongoing talks with ICE, CBP, or other law‑enforcement agencies about harvesting data from the glasses, and to commit to a transparent consultation process with independent privacy experts before any biometric identification ever reaches a consumer device. “People should be able to move through their daily lives without fear that stalkers, scammers, abusers, federal agents, and activists… are silently and invisibly verifying their identities,” the coalition writes, underscoring that bystanders have no meaningful way to consent to being scanned in public spaces.
Meta’s own Reality Labs memo, obtained by The New York Times and cited by Wired, reveals a calculated timing strategy: the company plans to debut the glasses “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.” The coalition brands this “vile behavior” that exploits rising authoritarianism and the Trump administration’s “disregard for the rule of law.” By treating the political moment as cover, Meta appears to be betting that the controversy will be muted while the product quietly slips into the market, a move that civil‑rights advocates say could hand powerful new tools to predators and state actors alike.
Meta did not respond to Wired’s request for comment. EssilorLuxottica, the parent company behind Ray‑Ban and Oakley, also declined to comment, leaving the public with only the internal documents and the coalition’s stark warning to gauge the risk. If the broader “Name Tag” version proceeds, anyone wearing the glasses could, in theory, look at a stranger on a crowded street and instantly retrieve a trove of personal data, including habits, relationships, and health information, drawn from the subject’s public social‑media footprint. That capability, the coalition argues, is fundamentally at odds with any reasonable expectation of privacy in public, and it cannot be remedied after the fact.
The debate over Meta’s smart‑glass ambitions is a microcosm of a larger clash between emerging biometric tech and civil‑society safeguards. As wearable AI becomes more discreet, the line between convenience and coercion blurs. The coalition’s demand for a complete shutdown of “Name Tag” before any consumer rollout reflects a growing consensus that, when it comes to facial‑recognition on everyday accessories, the default should be “no” until robust, enforceable protections are proven—not merely promised.