Last week, at Meta’s annual Connect conference, Mark Zuckerberg made his most aggressive play yet to shift computing off the phone and onto your face. The centerpiece: smart Ray-Ban glasses with a built-in display, paired with something called the Neural Band, a piece of wrist-worn jewelry that turns small finger movements into text. The pitch isn’t just novelty. It’s an attempt to replace the compulsive, heads-down smartphone habit with something lighter and less absorbing, and, eventually, to challenge Apple and Google for control of our mobile platforms.
Why Glasses, and Why Now, for Everyday Computing
Smartphones are mature, saturated, and increasingly legislated against, and consumer frustration with screen time keeps climbing. Digital overload cuts across generations, but younger internet users are especially at risk, according to Pew Research Center. Meta’s framing is that eyewear re-establishes “presence” in real-world interactions by demoting the phone from main screen to backup tool.

There’s also a blunt business reality. Meta pays platform tolls to access billions of users through Apple’s App Store and Google Play. Being the steward of the next general-purpose device would hand Meta distribution power, lower fees, and new commerce and ad surfaces it controls end-to-end.
Glasses + Neural Band: The Input Breakthrough
The familiar formula is here: cameras, speakers, and microphones for photo capture and voice commands, with an assistant embedded in the frame itself. New this time is an offset display that shows glanceable content such as messages, directions, or translations across Instagram, WhatsApp, and Facebook. But it’s the Neural Band that makes the system feel less like a toy.
It works by using sEMG, or surface electromyography, to read the electrical signals that travel from your brain to the muscles of your hand. Squeeze your fingers as though gripping a pen and “write” in the air, and the band translates those micro-motions into text. Onstage, Zuckerberg said he’s already at about 30 words per minute. For comparison, a major 2019 study by researchers at Aalto University and the University of Cambridge found average smartphone typing speeds of nearly 36 words a minute, while Reality Labs’ in-house early testers average about 21 words per minute: promising for a first generation that doesn’t have you talking to your glasses on the bus.
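How do muscle twitches become text? Meta hasn’t published the Neural Band’s decoding pipeline, but a classic sEMG approach filters the raw signal into per-channel energy features and classifies each short window against learned gesture templates. The sketch below is a toy illustration under those assumptions; the sampling rate, channel count, feature choice, and nearest-prototype classifier are all placeholders, not Meta’s implementation.

```python
import numpy as np

# Toy illustration of sEMG gesture decoding. Everything here (rates, channel
# count, prototypes) is a hypothetical stand-in, not the Neural Band's design.

SAMPLE_RATE = 2000   # Hz; assumed sEMG sampling rate
WINDOW_MS = 100      # classify one gesture per 100 ms window

def rms_features(window: np.ndarray) -> np.ndarray:
    """Root-mean-square energy per electrode channel, a classic sEMG feature."""
    return np.sqrt(np.mean(window ** 2, axis=0))

def classify(features: np.ndarray, prototypes: dict) -> str:
    """Nearest-prototype classifier mapping a feature vector to a gesture label."""
    return min(prototypes, key=lambda g: np.linalg.norm(features - prototypes[g]))

# Prototype feature vectors "learned" offline (fabricated numbers for illustration).
prototypes = {
    "pinch": np.array([0.80, 0.10, 0.10, 0.10]),
    "swipe": np.array([0.10, 0.70, 0.20, 0.10]),
    "rest":  np.array([0.05, 0.05, 0.05, 0.05]),
}

# Simulate one 100 ms window of 4-channel sEMG (noise standing in for real data).
samples = int(SAMPLE_RATE * WINDOW_MS / 1000)
window = np.random.default_rng(0).normal(0.0, 0.05, size=(samples, 4))
window[:, 0] += 0.8  # pretend channel 0 fired, as it might during a pinch

print(classify(rms_features(window), prototypes))  # -> "pinch"
```

A real decoder would be a learned model trained across many wearers rather than hand-set prototypes, but the window-then-features-then-classify loop is the same general shape.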
Gesture control is nothing new (think Nintendo’s Joy-Cons or the Apple Watch’s finger taps), but the aspiration here is finer-grained: silent input you can use anywhere. If Meta can make that input reliable, comfortable, and fast, the smartphone’s primary advantage, quick text entry and app control, begins to peel away.
A Shot at Presence, Not Just Performance
Silent sEMG input is the cure for the awkwardness of issuing voice commands in a cafe. It could make smart glasses socially acceptable and let interaction return to quick glances rather than full-screen absorption. That’s the philosophical reach: computing that retreats into your periphery rather than planting itself at the center of your awareness.
The cautionary tales are real. The early version of Google Glass alarmed bystanders over privacy, and other recent attempts at screenless wearables have stumbled on battery life, heat, and real-world usability. Meta’s glasses will be judged on the same details: whether they are comfortable, whether the battery holds up, and whether everyday tasks like messaging, navigation, and translation feel faster than pulling out a phone.

The Stakes: A New War Over Mobile Platforms
Meta’s Reality Labs has accumulated more than $70 billion in operating losses since 2020, according to company disclosures — a jaw-dropping bet that spatial computing will be the next mass platform. (The Ray-Ban line has already sold millions, providing Meta with a rare consumer beachhead compared to enterprise-first headsets and niche AR experiments.)
Apple’s Vision Pro offered a high-fidelity, high-price approach to spatial computing; Google is regrouping after multiple AR resets. Meta’s angle is something else entirely: lifestyle-friendly glasses at scale, underpinned by AI and paired with a novel input device. If developers show up to build must-have apps and the Neural Band keeps improving, lock-in could shift from the phone OS to whatever lives on your face and wrist.
Adoption Hurdles: Privacy, Style, Battery
Cameras on your face are bound to draw attention. Clear recording indicators, strict data policies, and opt-in controls aren’t nice-to-haves; they’re table stakes. Then there are the hardware realities: all-day battery life, heat management, prescription lens support, and styles people actually want to wear. Price will ultimately determine whether this becomes a mainstream upgrade or a tech-enthusiast treat.
Another swing factor is on-device AI. Translation, summarization, and visual understanding have to feel instant and local. As long as tasks keep defaulting back to the cloud, and sometimes failing on spotty connections, the phone remains the safer default.
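That cloud-fallback tension maps to a familiar engineering pattern: try the local model under a strict latency budget and escalate only when it can’t deliver. The sketch below is a generic illustration of that pattern, not Meta’s stack; every function name and timeout here is hypothetical.

```python
import asyncio

# Generic on-device-first, cloud-fallback pattern. All names and timings are
# hypothetical; Meta has not published how its glasses split local vs. cloud work.

async def run_on_device(task: str) -> str:
    """Stand-in for a small local model (e.g., translation) on the glasses."""
    await asyncio.sleep(0.05)   # pretend on-device inference latency
    return f"on-device result for {task!r}"

async def run_in_cloud(task: str) -> str:
    """Stand-in for a larger cloud model reached over a possibly spotty link."""
    await asyncio.sleep(1.5)    # pretend network + inference latency
    return f"cloud result for {task!r}"

async def answer(task: str, local_budget_s: float = 0.2) -> str:
    """Prefer the local model; escalate to the cloud only if it misses its budget."""
    try:
        return await asyncio.wait_for(run_on_device(task), timeout=local_budget_s)
    except asyncio.TimeoutError:
        return await run_in_cloud(task)  # slower and connectivity-dependent

print(asyncio.run(answer("translate: bonjour")))
```

The design choice matters for the article’s argument: the more tasks resolve inside the first branch, the less the glasses depend on a phone or a network to feel instant.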
What Will Make It a True Smartphone Killer?
Three milestones will reveal whether Zuckerberg’s gambit works: text input speeds consistently comparable to phones; a growing developer community building must-have glanceable apps for the glasses; and credible answers to privacy and fashion qualms. Partnerships with carriers and retailers could speed adoption, but word-of-mouth (do these actually make daily life better?) will decide it.
The smartphone won’t vanish overnight. But if Meta can make computing feel lighter, calmer, and less in-your-face, the phone could quietly slip into the role of second screen. That’s the endgame: not a louder device, but a quieter one that gradually supplants the habit we all formed around a glowing rectangle.