Meta is facing a U.S. lawsuit over privacy practices tied to its AI-enabled smart glasses after reports revealed that third-party workers reviewed user recordings, including intimate and highly sensitive moments. The disclosures, stemming from an investigation into a Kenya-based subcontractor, have already drawn scrutiny from the U.K.’s Information Commissioner’s Office and reignited a global debate over “always-on” consumer AI devices.
The Lawsuit and Key Allegations Against Meta and Partners
The complaint, filed by Clarkson Law Firm on behalf of New Jersey resident Gina Bartone and California resident Mateo Canu, accuses Meta of misleading consumers about how recordings from Ray-Ban Meta smart glasses are used. The plaintiffs argue that the glasses were marketed with phrases such as “designed for privacy” and “controlled by you,” without clearly warning that human reviewers could watch clips captured in private settings.
Beyond Meta, the filing names Luxottica of America, Meta’s manufacturing partner for the glasses, alleging violations of consumer protection and false advertising laws. The suit highlights the absence of a meaningful opt-out from human review, asserting that customers reasonably believed sensitive content would not be exposed to overseas contractors.
Inside the Content Review Pipeline for Smart Glasses
Meta says that media remains on-device unless users choose to share with Meta AI or others, and that a mix of contractors and internal teams may review shared content to improve product performance—disclosures it points to in its privacy terms. Company statements also describe filters and face-blurring to reduce identifiability, but reporting from Swedish outlets and follow-up coverage suggested that blurring did not consistently work and that reviewers saw nudity, sex, and bathroom footage.
The BBC has noted that Meta’s U.K. AI terms reference human review; a U.S. version of Meta’s policy similarly states that interactions with AIs—including content sent to them—may be reviewed, manually or automatically. What’s driving backlash is the gap between fine-print policy language and the marketing narratives that imply strong, user-controlled privacy by default.
Scale amplifies the stakes. In 2025, more than seven million people reportedly purchased Meta’s smart glasses. Even a small share of users opting to share content with Meta AI could yield a substantial stream of sensitive recordings flowing through labeling and quality-assurance pipelines.
Regulatory Pressure Mounts on AI Wearables and Privacy
The U.K.’s privacy regulator has opened an inquiry into the revelations, focusing on whether bystanders and users were adequately protected and informed. In the U.S., while the lawsuit proceeds in civil court, experts note that the Federal Trade Commission has broad authority to police deceptive or unfair practices; marketing that promises robust privacy while quietly enabling human review is a classic flashpoint for Section 5 scrutiny.
State laws add complexity. Under frameworks such as the California Consumer Privacy Act as amended by the CPRA, companies must provide clear notice and honor user rights around the collection and use of personal information, particularly sensitive data. If any biometric processing were involved—such as face templates for recognition—additional state laws could be implicated. Internationally, the GDPR would demand clear legal bases, purpose limitation, and strict controls on cross-border transfers to processors in countries like Kenya.
Marketing Claims Versus Device Realities
Smart glasses occupy a tricky middle ground: they can store media locally yet still route data to the cloud when users invoke AI features. Subtle defaults and prompts—what’s automatically uploaded, what’s analyzed on-device, when the LED capture indicator lights up—make the difference between a genuinely private device and a “luxury surveillance” tool.
The controversy mirrors earlier episodes across consumer tech. Contractors for Apple, Amazon, and Google have at times reviewed voice snippets to improve assistants, occasionally overhearing personal moments, which led to policy changes, stricter on-device processing, and clearer opt-outs. Wearable cameras multiply the privacy surface area because they can incidentally record bystanders who never agreed to be part of a data pipeline.
What to Watch Next as Legal and Regulatory Actions Unfold
The plaintiffs seek relief that could include stronger disclosures, an effective opt-out from human review, and independent audits of data handling. Privacy engineers say the industry’s near-term fixes are straightforward: default to on-device processing where feasible, apply robust redaction before any human access, minimize retention windows, and separate model training data from identifiable user media.
Standards bodies and regulators have offered playbooks that fit this moment. The NIST AI Risk Management Framework emphasizes data minimization and role-based access controls; privacy certifications inspired by ISO/IEC 27701 push for auditable governance; and regulators increasingly expect “privacy by design” to be reflected not just in technical architectures but also in advertising claims.
For Meta and peers, the outcome will shape norms for AI wearables. If courts or regulators find a mismatch between promises and practice, expect new baselines: prominent in-product notices, granular toggles for human review, and clearer signals to bystanders. With millions of devices in circulation, the margin for ambiguity is shrinking fast.