Meta’s latest attempt to make smart glasses ubiquitous came with a hefty price tag, flashy spectacles and some very public hiccups. The $799 Meta Ray-Ban Display (billed as readable even in bright sunlight thanks to a 5,000-nit panel) stole the show at the company’s developer keynote, only for live demos to flop and undercut the pitch that always-available AI is ready to live on your face.
What Meta actually announced at its developer keynote
In addition to a new version of its Ray-Ban camera glasses and a more sports-leaning Oakley model, Meta’s headline device was the Ray-Ban Display. The naming gets confusing (there are now multiple Ray-Ban-branded devices in Meta’s portfolio), but the pitch is straightforward: A pair of glasses that overlays crisp visuals atop your field of view, readable even in bright sunlight. They are the familiar Ray-Ban frames from EssilorLuxottica, now fitted with compute, sensors and display tech developed by Meta.
Meta also announced the Meta Neural Band, a fabric wrist accessory that reads subtle muscle signals at the wrist so you can “handwrite” text in the air. The company claims early testers reach about 30 words per minute, and the band also supports basic controls like volume adjustments. Both products are being presented as gateways to “agentic AI” assistants that understand scenes, retrieve information and even take action on your behalf.
Onstage hiccups prompted some hard questions
Live demos got off on the right foot, using a first-person view from the Ray-Ban Display as presenters cued up music and fired off rapid emoji replies. Then things unraveled. An incoming WhatsApp video call from Meta CTO Andrew Bosworth kept popping up on the glasses, and the Neural Band couldn’t answer it, so Bosworth had to walk onstage in person.
Another segment had the new LiveAI cooking assistant walking a host through a sauce recipe. It never made it out of the gate: the glasses kept repeating “Now that you’ve built your base…” while the host was still asking what the first step was. Meta later blamed Wi‑Fi issues, but the glitches spotlighted an all-too-familiar fragility: if the network stutters, the magic dies.
Rounding out the keynote, Meta played a pre‑recorded reel of the glasses being used to design a surfboard and order parts: a gleaming vision that the messy reality onstage never matched.
Why the $799 price matters for Meta’s smart glasses
At $799, the Ray-Ban Display costs far more than camera‑first smart glasses (like Meta’s previous Ray-Ban models) and still well below true mixed reality headsets. That middle ground ought to be a fertile one: Light, stylish eyewear that can show you information at a glance is an easier sell than clunky visors. But the demos illustrated a central truth of ambient computing: reliability trumps raw specs. When notifications misfire, or step‑by‑step instructions loop, or calls can’t be taken hands‑free, the premium looks less defensible.
Display brightness is the headline spec here. Five thousand nits is an aggressive figure; today’s top smartphones tend to peak around 2,000 to 2,500 nits, and waveguide optics can rob a display of much of its luminance on the way to the eye. Outdoor legibility has thwarted plenty of AR hardware, and the trade‑offs land on power and heat: sustaining that brightness without cooking the wearer or draining the battery is a serious engineering challenge, and a crucial test point for reviewers.
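To make the waveguide point concrete, here’s a back-of-the-envelope sketch in Python. The efficiency figures are placeholders for illustration (diffractive waveguides are often cited as passing only a few percent of the source light), not Meta’s published numbers, and Meta hasn’t said whether the 5,000-nit spec describes the panel or what reaches the eye; the sketch assumes the latter.

```python
# Illustrative arithmetic only: the efficiency figures are assumptions,
# not Meta's published specs. Waveguide combiners deliver just a fraction
# of the source panel's light to the eye, so the panel must be driven far
# brighter than the luminance the wearer actually perceives.

def required_panel_nits(target_eye_nits: float, waveguide_efficiency: float) -> float:
    """Panel luminance needed for the eye to see `target_eye_nits`
    when the optics pass only `waveguide_efficiency` of the light."""
    return target_eye_nits / waveguide_efficiency

# Assume the 5,000-nit figure is what reaches the eye, and try a range of
# optical efficiencies often cited for diffractive waveguides (placeholders).
for efficiency in (0.01, 0.05, 0.10):
    panel = required_panel_nits(5_000, efficiency)
    print(f"{efficiency:>4.0%} efficient optics -> {panel:>9,.0f}-nit source panel")
```

However the spec is defined, that division is why power and heat budgets dominate outdoor‑legible AR designs.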
The Neural Band’s bet on solving smart glasses input
Meta’s wristband reads fine muscle and tendon signals to translate micro‑gestures into text and controls. The approach builds on research Meta has shown publicly over the past few years, and it leans on handwriting being a deeply practiced motor skill, unlike pecking at tiny thumb keyboards. If it works at scale, it could solve the perennial input problem for glasses: how to interact in a noisy, private or socially sensitive setting without using your voice.
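For a sense of how this class of input generally works (a rough sketch, not a description of Meta’s actual pipeline), the Python below samples wrist electrodes, slices the stream into short windows, extracts classic surface‑EMG features and hands them to a classifier. Every constant, rate and label is a placeholder.

```python
# Minimal sEMG gesture-decoding sketch. All parameters are illustrative;
# a shipping system would use a trained model, calibration and debouncing.
import numpy as np

WINDOW = 200     # samples per window (e.g., 100 ms at 2 kHz) -- assumed rate
CHANNELS = 8     # electrodes around the wrist -- assumed count

def features(window: np.ndarray) -> np.ndarray:
    """Classic per-channel sEMG features: mean absolute value and RMS."""
    mav = np.abs(window).mean(axis=0)
    rms = np.sqrt((window ** 2).mean(axis=0))
    return np.concatenate([mav, rms])

def decode(stream: np.ndarray, classifier) -> list[str]:
    """Turn a (samples, channels) signal into a sequence of gesture labels."""
    labels = []
    for start in range(0, len(stream) - WINDOW + 1, WINDOW):
        labels.append(classifier(features(stream[start:start + WINDOW])))
    return labels

# Toy stand-in classifier that thresholds overall activation; a real one
# would map feature vectors to letters or gestures like "pinch" or "swipe".
toy = lambda f: "active" if f.mean() > 0.5 else "rest"
print(decode(np.random.randn(1000, CHANNELS), toy))
```

The plumbing is the easy part; the open question is whether a model generalizes across different wrists and electrode placements, which is what the 30‑words‑a‑minute claim will test.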
But as the keynote showed, input is just one link in a longer chain. The system also has to sense context, interpret intention, retrieve results from the cloud and present human-readable output with low latency. Each hop introduces failure modes. Wi‑Fi is a classic scapegoat at launch events, but it’s also true that so many of these “agentic” features still lean heavily on connectivity.
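To see why those hops matter, here’s a toy latency budget. Every number is a placeholder rather than a measurement; the point is how quickly independent stages stack up, and that any one of them (the Wi‑Fi hop, say) can stall the whole interaction.

```python
# Hypothetical per-stage latencies for one assistant round trip (ms).
# These are made-up illustrative values, not benchmarks of Meta's system.
budget_ms = {
    "wake / gesture detection": 50,
    "camera capture + encode":  80,
    "upload over Wi-Fi":       120,   # the hop Meta blamed onstage
    "cloud model inference":   400,
    "download response":        60,
    "render on the display":    30,
}
total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:<26} {ms:>4} ms")
print(f"{'total':<26} {total:>4} ms  ({total / 1000:.2f} s per interaction)")
```

Drop the network and a pipeline like this doesn’t degrade gracefully; it simply stops, which is what the keynote audience watched happen.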
A category still waiting for its moment to arrive
Smart glasses have a rocky history. Google discontinued sales of Glass Enterprise in 2023. Snap took a huge write‑down in 2017 when demand for Spectacles fizzled. Analysts at firms like CCS Insight and IDC have made the argument time and again: Comfort, battery life and clear use cases matter more than flashy demos. Meta’s own camera‑centric Ray-Bans have settled into a niche of hands‑free capture and live streaming; shifting to display‑forward glasses raises the bar for utility and polish.
Privacy remains thorny. Wearable cameras and always‑listening assistants invite scrutiny from regulators in Europe and beyond. Clear recording indicators, thoughtful on-device controls and transparent policies will be crucial if Meta wants acceptance in cafés, classrooms and offices: environments where social norms, not specs, make or break adoption.
Early verdict: promise, with patience required
On paper, the Ray-Ban Display combines three promising ingredients: A bright, legible screen; input that feels natural on your wrist; and an AI that can see what you see. Onstage, the recipe separated. None of these stumbles is fatal, and software fixes can come fast, but first impressions are crucial, especially at $799.
The real story will emerge over the next few weeks, as reviews assess outdoor readability, battery life, Neural Band accuracy and how much LiveAI can actually help with step‑by‑step tasks. If Meta can polish those keynote rough edges into a daily value proposition, the Ray-Ban Display might finally give smart glasses a legitimate everyday use case. If not, well, it risks becoming yet another cool concept in a segment that has had too many.