Nothing CEO Carl Pei believes the classic app-centric smartphone is nearing its sunset. Speaking at SXSW, Pei argued that intelligent agents able to understand intent and act across services will make tapping icons feel archaic — and he warned founders whose value sits solely inside an app that disruption is coming whether they like it or not.
Pei’s vision shifts the phone from a launcher of apps to a launcher of outcomes. Instead of hopping between screens and logins, an AI agent would infer goals, coordinate the right services, and deliver results — no manual orchestration required. He framed today’s mobile UX as a holdover from the pre-iPhone era and said the industry has mistaken thinner bezels for true progress.
This push toward an “AI-first” device has been central to Nothing’s roadmap and investor narrative, including a Series C raise reportedly totaling $200 million. Pei acknowledges that apps won’t vanish overnight, but he contends that the foundation of the next mobile era is an agent layer, not an icon grid.
Why Pei Thinks Traditional Smartphone Apps Are Obsolete
Most real-world tasks are cross‑app by nature. Planning coffee with a friend typically spans messaging, maps, ride‑hailing, and calendar — four contexts, four silos, four chances to drop the thread. Data.ai’s State of Mobile reports that people use dozens of apps each month, yet the bulk of time concentrates in a small handful, a sign that app switching remains a tax users tolerate rather than enjoy.
Agents flip that model. Instead of users breaking tasks into steps, the system takes an intent like “grab coffee near the office at noon with Maya,” checks schedules, suggests a venue, reserves a table, and coordinates transport — quietly brokering between services with user-approved permissions.
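The decomposition described above can be sketched as a toy planner. Everything here is illustrative: the `Intent` shape and the dotted service names (`calendar.find_common_slot`, `places.search`, and so on) are assumptions, not any real agent API.

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    goal: str
    participants: list
    constraints: dict = field(default_factory=dict)

def plan(intent: Intent) -> list[str]:
    """Break one high-level intent into ordered service calls."""
    steps = []
    if intent.participants:
        # Only coordinate calendars when other people are involved.
        steps.append("calendar.find_common_slot")
    steps.append("places.search")       # find a venue matching the constraints
    steps.append("reservations.book")   # hold a table, pending user approval
    steps.append("rides.schedule")      # arrange transport to the venue
    return steps

intent = Intent(
    goal="coffee near the office at noon",
    participants=["Maya"],
    constraints={"time": "12:00", "near": "office"},
)
print(plan(intent))
```

The point is the shape, not the logic: the user supplies one intent, and the system, rather than the user, produces and executes the step list across services.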
From Commands to Deeper Intent Understanding on Phones
Pei distinguishes between basic command execution and deeper intent modeling. The former is today’s familiar “book me a flight” assistant. The latter remembers long‑term preferences, anticipates constraints, and tracks evolving goals — closer to a personalized chief of staff than a voice remote. Features like ChatGPT’s memory and Google’s work on multimodal context are early proof points for this shift.
Delivering that safely requires a persistent personal model with strong privacy boundaries. On‑device AI is a key enabler here. Apple’s silicon, Google’s Tensor chips, and Qualcomm’s latest Snapdragon platforms emphasize neural processing that can keep sensitive inference local, cutting latency and reliance on the cloud while improving trust.
The Agent Interface, Not the App Icon, Should Dominate
Pei argues agents should not clumsily mimic human taps. Instead, services need machine‑readable capabilities designed for agents: structured APIs that advertise “what can be done” and “under what constraints.” Think of Android Intents, iOS Shortcuts, or OpenAI’s function‑calling as precursors — early scaffolding for capability discovery and safe execution.
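A capability descriptor in that spirit might look like the sketch below, which borrows the general shape of function‑calling schemas (a name, a description, JSON‑Schema‑style parameters). The `constraints` field and all specific names are hypothetical additions for illustration.

```python
import json

# Illustrative capability descriptor: a service advertises "what can be
# done" and "under what constraints" in machine-readable form.
capability = {
    "name": "reserve_table",
    "description": "Reserve a table at a cafe or restaurant",
    "parameters": {
        "type": "object",
        "properties": {
            "venue_id": {"type": "string"},
            "party_size": {"type": "integer", "minimum": 1, "maximum": 8},
            "time": {"type": "string", "format": "date-time"},
        },
        "required": ["venue_id", "time"],
    },
    # Hypothetical extension: execution limits an agent must respect.
    "constraints": {"requires_user_approval": True, "max_calls_per_day": 20},
}
print(json.dumps(capability, indent=2))
```

A catalog of such descriptors is what would let an agent discover and invoke a service without ever rendering its screens.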
If this layer matures, discovery moves from app stores to capability catalogs. Search results evolve from links to actions. The economic center of gravity shifts too: fees could migrate from in‑app purchases to orchestration and transaction settlement. Gartner has highlighted the rise of agentic platforms that broker tasks across tools, hinting at how enterprise and consumer ecosystems may converge.
What It Takes for AI Agents to Reliably Work at Scale
Reliability is the make‑or‑break metric. For payments, reservations, and travel, near‑flawless execution is table stakes. That demands tool‑use planning, verifiable outcomes, and graceful fallbacks when ambiguity is high. Vendors are racing to harden these ingredients with planning algorithms, sandboxed tool calls, and audit trails that let users see and undo what agents did.
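The “see and undo what agents did” property can be made concrete with a minimal audit ledger: each action is recorded together with its inverse, so the most recent action can be rolled back. This is a sketch under assumed names, not any vendor’s actual mechanism.

```python
class AuditLog:
    """Records agent actions alongside the callables that reverse them."""

    def __init__(self):
        self.entries = []

    def record(self, action, undo):
        self.entries.append({"action": action, "undo": undo, "undone": False})

    def undo_last(self):
        # Walk backwards to the most recent action that is still in effect.
        for entry in reversed(self.entries):
            if not entry["undone"]:
                entry["undo"]()          # invoke the stored inverse
                entry["undone"] = True
                return entry["action"]
        return None

bookings = set()
log = AuditLog()

# The agent books a table and logs the action with its inverse.
bookings.add("table-42")
log.record("reservations.book table-42", lambda: bookings.discard("table-42"))

# The user reviews the trail and reverses the booking.
log.undo_last()
print(bookings)  # set() — the reservation was rolled back
```

Real systems need more (compensating transactions for actions with side effects a lambda cannot unwind, like a sent message), but the ledger-plus-inverse pattern is the core of user-visible reversibility.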
Hardware matters as much as software. Modern mobile chips combine high‑throughput NPUs with efficient CPUs and GPUs, enabling on‑device large language models, vector search, and multimodal perception. Pair that with encrypted data stores and per‑capability permissions, and agents can act decisively without phoning home for every step.
Early Lessons and Ongoing Skepticism from Agent Devices
Recent “agent-first” gadgets offer cautionary tales. Devices like Humane’s AI Pin and Rabbit’s R1 promised to sidestep apps, but stumbled on latency, limited integrations, and brittleness. The takeaway isn’t that agents won’t work — it’s that they need deep, cooperative hooks into services and a phone‑class reliability bar before mainstream users will switch.
Even big‑tech experiments illustrate the hurdle. Voice assistants once let users order rides or meals hands‑free, but usage waned as fragile integrations met shifting incentives. To succeed, the next wave must align developer economics with agent execution, not compete with it.
What This Means For Startups And Platforms
For builders, the message is clear: design for capabilities, not just screens. Publish agent‑safe actions with scopes and rate limits, return structured results with receipts, and log every operation for user review. Treat consent as a first‑class feature. Where visual clarity matters, offer lightweight UIs as fallbacks — but assume an agent is in the driver’s seat.
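As a minimal sketch of that guidance, the function below publishes one agent-callable action gated by a scope check and a sliding-window rate limit, and returns a structured result with a receipt. The scope string, limits, and field names are all assumptions for illustration.

```python
import time
import uuid

RATE_LIMIT = 5          # calls allowed per window (illustrative)
WINDOW_SECONDS = 60.0
_calls: list = []       # timestamps of recent calls

def send_message(text: str, granted_scopes: set) -> dict:
    """An agent-safe action: scoped, rate-limited, and receipted."""
    # Consent as a first-class feature: refuse without the right scope.
    if "messages.send" not in granted_scopes:
        raise PermissionError("missing scope: messages.send")

    # Sliding-window rate limit: drop timestamps outside the window.
    now = time.monotonic()
    _calls[:] = [t for t in _calls if now - t < WINDOW_SECONDS]
    if len(_calls) >= RATE_LIMIT:
        raise RuntimeError("rate limit exceeded")
    _calls.append(now)

    # Structured result with a receipt the user can review or audit later.
    return {"status": "sent", "receipt_id": str(uuid.uuid4()), "text": text}

result = send_message("Noon works, see you there", {"messages.send"})
print(result["status"])  # sent
```

The receipt is what makes the operation reviewable after the fact; the scope and rate limit are what make it safe to expose to an autonomous caller in the first place.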
Pei isn’t declaring the app grid dead tomorrow. Nothing still ships an OS where people can make and use mini apps, a practical acknowledgment of the transition underway. But his bet is that the winning phones of the next cycle will excel at intention capture, not icon tapping — and that the services thriving on them will be the ones that agents can understand and trust.