Jony Ive says the smartphone taught the world to swipe, and to keep checking. Now the venerated designer is on a quest to build an A.I. device that breaks that habit. Addressing the crowd at OpenAI’s developer conference, Ive described his top-secret preproduction hardware as an antidote to the anxiety and distraction modern phones induce, and hinted at something calm, humane, and “inevitable” in its use.
A Design Reset for the AI Age, Beyond the Smartphone
Ive’s new effort with OpenAI is emerging from io, the hardware studio he built and that has since been folded into OpenAI itself. Specific details from the people devising this vision are scarce, but the idea is clearly more than a single gadget. Company statements have referenced a family of devices, and reporting from the Financial Times suggests at least one screenless handheld that uses microphones and cameras for context.
That screenless idea isn’t just about aesthetic minimalism. It is a philosophical about-face from the constantly refreshed feeds that define today’s attention economy. Ive is signaling a return to ambient, low-key computing — tools that appear when necessary, and disappear when not — rather than an attention-hustling slab of glass.
The track record matters here. From the iMac to the iPod, the iPhone, and the Apple Watch, Ive’s greatest hits worked by hiding complexity behind obvious interactions. If the new device hews to that lineage, expect fewer modes and settings, information surfaced as context-aware assistance, and a brutal edit of any feature that adds cognitive drag.
Can a No-Screen Device Beat the Smartphone Today?
Many have tried to unseat the phone with “AI-first” hardware, and failed. Humane’s AI Pin and Rabbit’s R1 both promised natural-language computing and ambient intelligence, yet reviews frequently complained of latency, fuzzy use cases, and battery drain. The lesson: minimalism alone can’t beat a phone; you also need speed, reliability, and a crystal-clear job to be done.
The partnership with OpenAI could change that calculus for Ive. Access to state-of-the-art models enables real-time transcription, summarization, and multimodal perception, but only if privacy and responsiveness are handled. Expect a hybrid approach: on-device processing for wake words, sensor fusion, and basic tasks, with cloud inference taking on the heavier lifting, so interactions stay snappy and personal data stays under the user’s control.
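To make that split concrete, here is a minimal, hypothetical sketch in Python of how such a hybrid router might be organized. Nothing here reflects a disclosed OpenAI or io design; the intent list, function names, and the stand-in call_cloud_model are assumptions for illustration only.

    # Hypothetical hybrid routing sketch; names, intents, and behavior are
    # illustrative assumptions, not a disclosed OpenAI or io design.

    LOCAL_INTENTS = ("set a timer", "what time", "volume up", "volume down")

    def handle_locally(utterance: str) -> str:
        # Simple commands stay on-device: no network round trip, no data leaves the room.
        return f"[on-device] {utterance!r}"

    def call_cloud_model(utterance: str, sensor_context: dict) -> str:
        # Stand-in for a hosted multimodal model; a real device would send audio or
        # camera context here only after an explicit wake word and user consent.
        return f"[cloud] reasoning over {utterance!r} with context {sorted(sensor_context)}"

    def route(utterance: str, sensor_context: dict) -> str:
        text = utterance.lower().strip()
        if any(text.startswith(intent) for intent in LOCAL_INTENTS):
            return handle_locally(text)
        return call_cloud_model(text, sensor_context)

    print(route("Set a timer for ten minutes", {}))
    print(route("Summarize what the speaker just said", {"mic": "last_30_seconds"}))

The design point is the split itself: latency-sensitive, privacy-sensitive work stays local, and only heavier requests cross the network.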
The smarter play might be microinteractions, not monologues: a tap on the wrist, a spoken confirmation, or a glanceable cue that bypasses the scroll. Meta’s smart glasses already show how natural hands-free capture and discreet prompts can feel. Ive’s design sensibility could push that trajectory toward something more deliberate, less performative, and genuinely useful.
The Addiction Question and the Evidence So Far
There is substance to Ive’s critique. Data.ai has previously reported that users in top mobile markets spend over five hours a day on their devices. Pew Research Center has noted that nearly half of U.S. teens report being online almost constantly, while many others are online multiple times per day. Research cited by the American Psychological Association has connected high levels of notification exposure to increased stress and compromised sleep. Although “smartphone addiction” is not a clinical diagnosis, a growing body of evidence links phone overuse to diminished well-being, including anxiety.
Design choices drive these outcomes. Infinite scroll, likes, and push alerts are not accidents; they are engagement levers. If Ive’s device aims to calm the overwhelm, it must reverse the phone’s and the watch’s defaults: bounded feeds that are quickly exhausted rather than bottomless ones, and less interaction rather than more, measured in seconds, not minutes. That’s a business-model decision as much as a design one.
What Success Looks Like for Calmer AI Hardware
Success won’t be measured in teraflops; it will show up quietly. Can the device help you jot down a thought, translate a sign, or make a quick decision without opening a dozen apps? Do you end the day with fewer notifications and more bandwidth for people or work? The winning recipe will probably combine on-device privacy by default, transparent data controls, and context-awareness that narrows choice rather than broadens it, with hard limits that keep the product from becoming just another screen.
Pricing, and the incentives it creates, may be the hardest design problem of all. Where engagement-driven revenue falls away, an AI device needs a sustainable alternative: a premium hardware price, a service subscription, or both. One could imagine experimenting with pricing models until the incentives demonstrably favor the user rather than the feed. And if OpenAI’s models are the engine, then cost discipline, energy efficiency, and responsible data handling become part of the product promise.
Ive’s message is blunt: our relationship with technology is broken, but it can be repaired. If his AI hardware can deliver the magic of “it just works” without the siren call of “just one more swipe,” it could redraw the boundary between helpful and overwhelming. The real breakthrough may not be a new interface at all, but a product that learns to step aside.