OpenAI’s big developer conference is riding exactly the kind of hype that precedes a major hardware announcement later today, and all indications are that it will be tied to an AI-powered wearable. An upcoming chat with superstar designer Jony Ive, long rumored to be involved with OpenAI’s hardware skunkworks, has focused expectations. The company is also using the stage to expand ChatGPT from a single interface into a platform, laying the groundwork for software that might one day run a hands-free device.
But beyond the buzz, the showcase is front-loaded with product depth. OpenAI is laying out how apps, agents, and new APIs fit together, precisely the architecture an AI wearable would need on day one. The event has drawn a larger crowd than past editions, with press in the room and more than 1,500 attendees at Fort Mason in San Francisco, a signal of the stakes.
Keynote Highlights and OpenAI Product Pillars
OpenAI’s leadership outlined four pillars: apps inside ChatGPT, building agents, writing code, and API updates. The throughline is turning ChatGPT into a runtime where third-party software lives, with agents coordinating work across data and services. It is a strategic shift from being a strong chatbot to being a programmable platform.
The timing follows a recent product wave, including the Sora 2 generative video model, a revamped iOS app, and an “Instant Checkout” feature for agentic e-commerce. Shipping those ahead of the event raises the bar for what is being saved for the stage, especially hardware.
ChatGPT Gets Apps via New SDK for In-Chat Tools
OpenAI introduced an Apps SDK that lets developers build complete applications that run inside ChatGPT, rather than bolt-on plug-ins. The SDK builds on the Model Context Protocol (MCP), an open standard introduced by Anthropic, to connect models to tools and data sources securely without bespoke integrations for every service.
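To make that concrete, here is a minimal sketch of the server side of such an integration, using the open-source Python MCP SDK (pip install mcp). The course-catalog tool and its data are hypothetical; real Apps SDK integrations layer chat-native UI on top of servers like this.

```python
# Minimal MCP tool server sketch using the open-source Python MCP SDK.
# The "course catalog" tool and its data are hypothetical; a real server
# would query a live service instead of a hard-coded dict.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("course-catalog")

# Tiny stand-in dataset for illustration only.
COURSES = {
    "machine learning": "Intro to Machine Learning, 6 weeks",
    "design": "Graphic Design Fundamentals, 4 weeks",
}

@mcp.tool()
def search_courses(topic: str) -> str:
    """Look up a course by topic keyword."""
    for name, description in COURSES.items():
        if topic.lower() in name:
            return description
    return "No matching course found."

if __name__ == "__main__":
    # Serve over stdio so an MCP-aware client can connect and call the tool.
    mcp.run()
```

Because MCP is model-agnostic, a server like this can in principle be reused across any client that speaks the protocol, which is the point of leaning on a shared standard rather than per-service plug-ins.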
In live demos, an OpenAI engineer dragged a Coursera video into the chat and asked follow-up questions about the lecture, then switched to create a poster in Canva—asking for multiple versions of the design, descriptions of what’s happening, and full-screen editing without leaving ChatGPT.
A Zillow demo followed shortly after, with natural-language filters standing in for UI sliders to narrow housing results.
The showstopper was orchestration: a pitch deck that spun up from the poster brief while other jobs were running, highlighting how the interface can manage multiple workflows.
The SDK is available only in preview, with app submissions to open later for review, an early signal of a curated marketplace and developer monetization to come.
Why an AI Wearable Fits OpenAI’s Playbook
When ChatGPT can host apps and agents, a wearable starts to make all kinds of sense. Voice-first interaction, camera-assisted understanding, and glanceable answers all require a software substrate that routes queries to the right tool, manages permissions, and retains context. That is exactly what the Apps SDK and agent stack provide.
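As a deliberately toy illustration of that substrate, the Python sketch below routes a query to a tool, checks a permission, and retains conversational context. Every name in it is hypothetical; this is not OpenAI’s implementation.

```python
# Toy sketch of a routing substrate: map a query to a tool, check the
# caller's permissions, and keep rolling context. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Session:
    user_permissions: set[str]
    context: list[str] = field(default_factory=list)  # rolling conversation memory

TOOLS: dict[str, tuple[str, Callable[[str], str]]] = {
    # keyword -> (required permission, handler)
    "weather": ("location", lambda q: "Sunny, 18°C"),
    "calendar": ("calendar", lambda q: "Next event: 3 pm standup"),
}

def route(query: str, session: Session) -> str:
    session.context.append(query)  # retain context for follow-ups
    for keyword, (permission, handler) in TOOLS.items():
        if keyword in query.lower():
            if permission not in session.user_permissions:
                return f"Permission '{permission}' required."
            return handler(query)
    return "No tool matched; falling back to the model."

session = Session(user_permissions={"location"})
print(route("What's the weather like?", session))  # -> Sunny, 18°C
print(route("What's on my calendar?", session))    # -> permission denied
```

A production substrate would replace keyword matching with model-driven tool selection, but the shape of the problem, routing plus permissions plus memory, is the same.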
The Financial Times had previously reported on discussions between OpenAI’s Sam Altman and Jony Ive’s LoveFrom over a consumer gadget, with exploratory talks involving SoftBank’s Masayoshi Son.
The specifics are still unknown, but the design pedigree of that trio suggests a high-end, human-first device that delivers always-available AI without needing your phone.
The category has already learned hard lessons. Humane’s AI Pin exposed the battery, heat, and latency trade-offs of cloud-heavy hardware, while Meta’s Ray-Ban smart glasses showed how multimodal AI can feel genuinely useful when it is fast, unobtrusive, and socially acceptable. OpenAI’s leverage would be deep control over the model, a rapidly expanding app ecosystem, and agents handling behind-the-scenes busywork.
Agent Builder and the OpenAI Developer API Roadmap
OpenAI is also previewing an Agent Builder for creating task-oriented assistants that can plan, call tools, and take action under guardrails. Expect fine-grained rule sets around capabilities, data paths, and human-in-the-loop checkpoints, which are essential for enterprise deployments and consumer trust.
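Agent Builder itself is a visual tool, but OpenAI’s open-source Agents SDK (pip install openai-agents) offers a rough code-level analogue. In the sketch below, the support agent and its refund tool are hypothetical, with a simple cap standing in for a guardrail.

```python
# Rough code-level analogue of a task-oriented agent with a guarded tool,
# sketched with OpenAI's open-source Agents SDK. The refund tool, its cap,
# and the instructions are all hypothetical. Requires OPENAI_API_KEY.
from agents import Agent, Runner, function_tool

@function_tool
def issue_refund(order_id: str, amount: float) -> str:
    """Issue a refund; amounts over the cap escalate to a human."""
    if amount > 100:
        return "Refund exceeds auto-approval limit; escalating to a human."
    return f"Refunded ${amount:.2f} for order {order_id}."

agent = Agent(
    name="Support agent",
    instructions=(
        "Help with order issues. Only issue a refund when the user supplies "
        "an order ID, and never promise amounts above the approval limit."
    ),
    tools=[issue_refund],
)

result = Runner.run_sync(agent, "Please refund $25 for order A1234.")
print(result.final_output)
```

The escalation branch is the human-in-the-loop checkpoint in miniature: the agent can act autonomously inside a bounded envelope and must hand off outside it.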
That fits the broader industry shift: Google is folding agents into its cloud stack, Microsoft’s AutoGen framework is maturing, and AWS is pushing Bedrock agents. OpenAI’s differentiator is tighter integration with ChatGPT and the new Apps SDK, plus MCP-based interoperability that lowers the friction of connecting to databases, CRMs, calendars, and commerce platforms.
On the API side, developers should see more reliable code generation and tool calling, along with lower-latency endpoints, which matter for the real-time voice and vision experiences a face-worn device would depend on.
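Streaming responses token by token is the standard way to cut perceived latency in those loops. A minimal sketch with the official openai Python client follows; the model name and prompt are illustrative.

```python
# Stream tokens as they arrive to reduce perceived latency, using the
# official openai Python client. Model name and prompt are illustrative;
# reads OPENAI_API_KEY from the environment.
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Describe what I'm looking at."}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries an incremental delta rather than the full reply.
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```

For a wearable, the same pattern lets speech synthesis begin on the first tokens instead of waiting for a complete response.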
What This Means for Developers and Everyday Users
At a previous developer conference, OpenAI said millions of builders and a large share of Fortune 500 companies were working with its stack. That scale seeds an app economy inside ChatGPT, one whose usage could jump from desktop and pocket to face or lapel the moment a wearable arrives.
For developers, the near-term opportunity is straightforward: build app-like experiences that leverage conversational context and agentic execution, with a shot at distribution through a curated listing.
For users, it means an assistant that can not only answer questions but carry out tasks across services: booking, buying, creating, and learning, all without tab thrashing.
Regardless of whether the hardware takes a bow on stage, the software groundwork is clear. OpenAI is turning ChatGPT into a platform that can slide from the browser into something you wear, and if and when the wearable arrives, an ecosystem of apps and agents will already be waiting in the wings.