I spent a few days wearing Meta Reality Labs’ new Ray‑Ban Display glasses, and two innovations jumped out: a practical heads-up display, and silent, wrist-worn neural input instead of taps or voice commands. Together, they make smart glasses feel less like a novelty and more like an honest next step beyond the smartphone.
A heads‑up display that finally earns its place in front of your eye
The right lens houses a full‑color display that’s bright enough to read outdoors (Meta rates it at up to 5,000 nits) and offset just far enough that it doesn’t take over your field of view. It’s the first time I’ve worn consumer eyewear where information appears in my periphery without pulling me out of the moment.

Importantly, the glasses run an OS designed for glanceability. I used on‑glass framing guides to compose photos and video, then adjusted composition mid‑clip; a pinch‑and‑twist gesture zoomed me to 3x without the agony of fishing for a phone. Captions rolled across the screen in real time, and the AI overlay answered rapid-fire questions with compact visual prompts rather than a screenful of filler.
Using WhatsApp felt natural: brief replies, quick confirmations and no dictating out loud for everyone around me to hear. That’s the difference between a notification you dread and one you can deal with in under two seconds without breaking eye contact.
Silent neural input bests touch and voice
The companion wristband reads the tiny electrical signals your forearm muscles produce, technology Meta absorbed with its CTRL‑labs acquisition. Within five minutes of calibration, I was flipping through pages with micro‑gestures: a pinch to select, a flick to go back, a subtle swipe for the next page and a pinch‑and‑twist for volume or zoom. No exaggerated gestures, no “Hey assistant” preamble.
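To make that interaction model concrete, here is a minimal sketch of how recognized wrist gestures might be dispatched to on-glass actions. The gesture names and handler functions are my own illustration of the idea, not Meta’s SDK or the wristband’s actual decoder.

```python
# Hypothetical sketch: one micro-gesture maps to one action, with no wake word
# and no touchpad hunting. Names and handlers are illustrative only.
from typing import Callable, Dict

def select_item() -> str:
    return "selected current item"

def go_back() -> str:
    return "went back one screen"

def next_page() -> str:
    return "advanced to next page"

def adjust_dial(delta: float) -> str:
    return f"adjusted volume/zoom by {delta:+.1f}"

GESTURE_ACTIONS: Dict[str, Callable[[], str]] = {
    "pinch": select_item,
    "flick": go_back,
    "swipe": next_page,
}

def handle_gesture(name: str, **kwargs) -> str:
    """Dispatch a gesture label (e.g. from an EMG gesture classifier)."""
    if name == "pinch_and_twist":  # continuous gesture carries a value
        return adjust_dial(kwargs.get("delta", 0.0))
    action = GESTURE_ACTIONS.get(name)
    return action() if action else f"ignored unknown gesture: {name}"

if __name__ == "__main__":
    for event in ("pinch", "swipe", "flick"):
        print(handle_gesture(event))
    print(handle_gesture("pinch_and_twist", delta=2.0))
```

The point of the sketch is how little each interaction costs: a single recognized gesture resolves directly to an action, with nothing to unlock, tap or say.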
Latency felt close to phone‑grade (you can still perceive a little of it), and confidence came quickly. By the end of the session, I didn’t even think about commands; I just did them. Neural input is faster in public, more private in meetings and more robust in noise than voice. All in all, it’s less obtrusive and more accurate than touchpads on frames.
It’s the other half of ambient computing: acting on something at a glance without pulling a slab of glass out of your pocket. This reflects something human‑computer interaction researchers at places like MIT and Carnegie Mellon have been saying for years: when you minimize the cost of interacting, you get different behavior. That cost is almost zero here.
How they held up in real‑world use
Display clarity is solid. Crisp text was easy to read, though I occasionally had to close my left eye to parse dense paragraphs; for prompts, captions and framing, the monocular display did the job. What really impressed me was Conversation Focus: the system homed in on the person in front of me and transcribed what they were saying, even with background noise, a convincing demonstration of modern speech separation and beamforming done right.
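For a sense of the underlying idea, here is a minimal delay-and-sum beamforming sketch. It illustrates the general technique of steering a microphone array toward the talker in front of you, not Meta’s actual audio pipeline; the two-mic geometry, spacing and sample rate are assumptions for the toy example.

```python
# Minimal delay-and-sum beamforming sketch (illustrative, not Meta's pipeline).
import numpy as np

SAMPLE_RATE = 16_000      # Hz, assumed
SPEED_OF_SOUND = 343.0    # m/s
MIC_SPACING = 0.05        # meters between two mics on the frame, assumed

def delay_and_sum(left: np.ndarray, right: np.ndarray, angle_deg: float) -> np.ndarray:
    """Align two mic channels for a source at angle_deg (0 = straight ahead), then average."""
    # Extra travel time to the far mic for an off-axis source.
    delay_sec = MIC_SPACING * np.sin(np.radians(angle_deg)) / SPEED_OF_SOUND
    delay_samples = int(round(delay_sec * SAMPLE_RATE))
    # Shift one channel so the target's wavefront lines up in both channels;
    # sound from other directions stays misaligned and partially cancels.
    aligned_right = np.roll(right, -delay_samples)
    return 0.5 * (left + aligned_right)

# Toy usage: the talker straight ahead reaches both mics at the same time,
# while off-axis noise arrives at the right mic slightly later.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
speech = np.sin(2 * np.pi * 440 * t)                      # stand-in for the talker
noise = 0.5 * np.random.default_rng(0).standard_normal(t.size)
left_mic = speech + noise
right_mic = speech + np.roll(noise, 37)                   # delayed, decorrelated noise
focused = delay_and_sum(left_mic, right_mic, angle_deg=0.0)
```

Averaging the aligned channels keeps the forward talker at full strength while the decorrelated off-axis noise partially cancels, which is the basic trick that modern speech-separation systems build on with far more sophistication.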
Live AI mode is meant to be a memory companion, an interface that pays attention. Start a session and it can pull in instructions, outline the steps you take, and pin its references to the context you’re seeing and hearing. Interviews, workshops, complicated how-tos: that flips AI from a chatbot into a real‑time assistant.

Battery life is rated at up to 18 hours, roughly a day of light use. The frames are water‑resistant, and the fit was comfortable; within minutes I forgot the wristband was on my arm.
Limits and trade‑offs you should consider before buying
The kit costs $799, including the neural wristband. It comes in two colors, black or brown, and transition lenses are standard. The trade‑off for that built‑in display is substantial: these frames don’t support prescription lenses, a deal breaker for many people until prescription‑friendly versions arrive.
There’s also the social contract. Privacy concerns loom over cameras and displays worn on faces. Indicator lights help, but social conventions take time to form. On the technical side, long stretches of reading text with one eye can be tiring, though keeping on‑glass text short worked just fine for me.
Retail demos will matter more than spec sheets. Partner stores run by EssilorLuxottica, such as Sunglass Hut and LensCrafters, have served as de facto laboratories for smart eyewear. You’ll want to try before you buy.
Why this might get us past smartphones sooner rather than later
People already live inside their notifications. Data.ai’s State of Mobile research shows heavy users spend hours in apps every day, and Pew Research Center finds a large share of adults feel they check their phone too much. The remedy isn’t more phone; it’s less friction. A quick glance for the right information and a silent gesture in reply are exactly that.
These glasses don’t yet do away with the need for a phone. They do something better: they erode the phone’s monopoly on attention. When a color HUD makes information ambient and a neural wristband makes input invisible, you start looking up again, at people and places rather than down at a screen.
That’s the winning pair: a display you can live with and an input method built for real‑world needs. If developers push toward glanceable apps, private‑by‑default AI and comprehensive accessibility support, this won’t just be a cool category. It’ll be unavoidable.
Smart glasses have promised this future for a decade. Now, for the first time, I can see it, and control it, without speaking a word.