I tried out the Meta Ray-Ban Display prototype for an hour, and one as-yet-unshipped feature really tied everything together. It is a silent, almost imperceptible way of typing in midair: a handwriting gesture that translates tiny finger motions into letters and numbers. Combined with Meta’s in-lens color display and a neural wristband, it is what makes these glasses feel less like a novelty and more like a credible phone replacement.
What the Band and Display Actually Do and How They Work
The right lens contains a color microdisplay that only the wearer can see. It’s sharp enough for video calls, social feeds, and the rich cards served by the onboard assistant. Unlike prism-based HUDs, the image sits comfortably in your field of view without a distracting optical block, and from the outside the lens looks no different from standard glasses.
Control comes from a soft neural wristband that reads tiny electrical signals in your forearm. A thumb–index pinch selects, a thumb–middle pinch goes home, and gentle swipes move you around, even if your hand is in a sleeve. This isn’t speculative tech, either: Facebook Reality Labs has demonstrated EMG input with single-digit-millisecond latency since it acquired CTRL-Labs, and the demo here felt well past the gimmick stage.
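To make the interaction model concrete, here is a minimal sketch of how decoded wristband gestures might be routed to UI actions. The gesture names, the mapping, and the `ui` handler interface are illustrative assumptions on my part, not Meta’s actual API.

```python
from enum import Enum, auto

class Gesture(Enum):
    """Gestures an EMG decoder might emit (names are hypothetical)."""
    PINCH_THUMB_INDEX = auto()   # select
    PINCH_THUMB_MIDDLE = auto()  # go home
    SWIPE_LEFT = auto()
    SWIPE_RIGHT = auto()

# Hypothetical mapping from decoded gestures to UI handler names.
ACTIONS = {
    Gesture.PINCH_THUMB_INDEX: "select",
    Gesture.PINCH_THUMB_MIDDLE: "go_home",
    Gesture.SWIPE_LEFT: "previous_item",
    Gesture.SWIPE_RIGHT: "next_item",
}

def dispatch(gesture: Gesture, ui) -> None:
    """Route one decoded gesture to the matching method on a UI object."""
    handler = ACTIONS.get(gesture)
    if handler is not None:
        getattr(ui, handler)()
```

The appeal of this shape is that the decoder and the interface stay decoupled: the band only has to emit a small vocabulary of gestures, and the glasses decide what each one means in context.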
The Air Writing Trick That Solves Everything
The yet-to-be-released feature is a handwriting gesture that lets you “write” in the air with slight finger movements, as if tracing letters on a notepad only you can see. In my testing, it recognized both cursive and print with surprising accuracy, even when I wrote faster or got a little sloppy. The assistant showed my writing in the lens as I went and fixed mistakes on the fly.
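Meta hasn’t published how its recognizer works, but a plausible pipeline is easy to sketch: segment the motion signal into strokes, classify each stroke as a character, and run a correction pass over the accumulated text before rendering it in the lens. Everything below (`classify_char`, `correct_text`, `render`) is a hypothetical stand-in, not Meta’s implementation.

```python
# Conceptual pipeline only; a stroke is a list of (x, y, t) samples
# decoded from finger micro-motion.

def recognize_air_writing(stroke_stream, classify_char, correct_text, render):
    """Accumulate characters stroke by stroke, correcting and previewing
    the running text as the user writes."""
    text = ""
    for stroke in stroke_stream:
        char, confidence = classify_char(stroke)  # per-character classifier
        if confidence > 0.5:                      # drop low-confidence strokes
            text += char
        text = correct_text(text)                 # autocorrect-style cleanup
        render(text)                              # live in-lens preview
    return text
```

The on-the-fly correction I saw in the demo is consistent with this kind of design, where a language-model pass cleans up the raw character stream, which would explain why sloppy or fast writing still came out right.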
Why it matters: text input is the bottleneck for wearables. Voice is quick but intrusive; virtual keyboards are clunky; paired phones defeat the purpose. Air writing keeps you quiet, accurate, and heads-up. And it unlocks contexts where voice struggles (meetings, transit, shared spaces) without the social friction of talking to your glasses.
Living Without the Phone: Daily Basics on the Glasses
Between the display and the band, I ran through the basics of what I usually do on a phone: responding to messages, browsing social updates, playing music, pulling up photos, and joining a video call. The assistant’s multimodal layer was more than a nice-to-have: recipes arrived as tidy step cards, navigation directions hovered at a glance, and visual responses accompanied spoken ones.
Real-time captions were a standout. The glasses transcribed the conversation as text in front of my eyes, a feature that proved useful in the loud demo environment and is potentially game-changing for people who are hard of hearing. Hearing research has long shown that accurate captions aid comprehension in complex listening situations; bringing that benefit into your everyday field of vision is an exciting step for accessibility.
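A streaming caption loop is conceptually simple, which is part of why it suits glasses so well: transcribe short audio chunks incrementally and keep only the most recent words on screen so the text stays glanceable. The sketch below assumes hypothetical `transcribe_partial` and `display` functions rather than any real Meta API, and it ignores the hypothesis revision that production streaming recognizers perform.

```python
def caption_loop(audio_chunks, transcribe_partial, display, window_size=12):
    """Feed short audio chunks to an incremental recognizer and keep only
    the most recent words on screen so captions stay glanceable."""
    window = []
    for chunk in audio_chunks:
        window.extend(transcribe_partial(chunk))  # newest hypothesis words
        window = window[-window_size:]            # trim to a readable line
        display(" ".join(window))
```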
I also tried visual understanding. Point at a flower, ask what it is, and the name comes up with a reference photo. Prompt an image to be generated, and it appears in-lens, instantly. The latter is more tech flex than everyday habit for now, but it feels like carrying a creative canvas with you.
Comfort, Battery Life, and Price Considerations
The frames I tested weigh 69 grams, versus around 52 grams for the earlier display-free Ray-Ban pair. They feel like chunky acetate eyeglasses: noticeable but not burdensome after an hour. The neural band feels like a snug fitness strap, and once you start interacting, you stop thinking about it entirely.
Meta’s claim of up to six hours on a mixed-use charge, which will need real-world scrutiny, is roughly in line with what early AR eyewear usually delivers. At a reported $799, these are aimed squarely at early adopters and professionals who covet glanceable computing. For those groups, the value proposition improves substantially once private, accurate text input is possible.
A Note on Privacy, Discretion, and Social Fit
Subtle displays and silent input raise obvious questions. The telltale camera light persists, but the prospect of someone reading on-screen messages or composing notes surreptitiously can be discomfiting to those around them. The open question is whether clearer signaling standards for wearable capture and display will emerge; makers should follow the lead of EssilorLuxottica and Meta on transparent defaults and controls.
On the other hand, the same discretion enables accessibility use cases (private captions, translation, reminders) that are hard to deliver any other way. As with smartphones a decade ago, norms will follow utility if the advantage is tangible and respectful.
Where This Is Headed for Everyday Wearable Computing
Meta’s executives have said repeatedly that neural interfaces are the next big step in wearables. In public Q&A sessions, CTO Andrew Bosworth has even mused about a path from neural bands to watch-like products. If this handwriting gesture ships at volume, it will validate that roadmap: compute migrates from the hand to the eyes, and input contracts to intent-level motion.
Smart glasses have struggled because they ask you to trade comfort and social ease for too little capability. The Meta Ray-Ban Display prototype, especially with air writing, reverses that equation. It offers enough utility to leave your phone in your pocket, and sometimes to forget it’s there entirely.