OpenAI’s first hardware is edging toward reality, with a senior executive signaling that the long-rumored ChatGPT device is on track for an unveiling in the second half of next year. The company is keeping design and feature details tightly under wraps, but it has hinted that multiple devices are in the works and that commercial availability could follow in the same window, depending on progress.
What We Know About the Timeline for OpenAI’s Device
Speaking during a session hosted by Axios at the World Economic Forum, OpenAI’s chief global affairs officer Chris Lehane said the device program remains “on track” for a late-year debut. He added that the company is still weighing the sales timetable and will decide based on how development advances. Axios also reported that OpenAI executives used the word “devices” in the plural to describe a major push over that same period, suggesting a broader hardware roadmap rather than a one-off launch.
That language matters. Few AI companies have the resources or user reach to make hardware meaningful at scale. OpenAI, which already draws hundreds of millions of monthly ChatGPT visits and supports large enterprise deployments, is positioning its device line as a core pillar rather than an experiment.
What the Device Might Be and Its Likely Design
OpenAI has not confirmed the form factor, but multiple leaks point to an audio-first wearable. One widely cited rumor describes an open-ear design—pill-shaped earpieces that rest behind the ear rather than sealing the canal—paired with an egg-shaped charging case. The device has reportedly carried the codename “Sweetpea” and could target the everyday assistant role now dominated by earbuds and smart glasses.
Speculation also centers on a custom silicon path. Industry chatter has linked the project to a next-generation Exynos platform from Samsung, with manufacturing handled by Foxconn, the contract giant behind iPhones and the Google Pixel line. None of this has been confirmed publicly, and hardware programs often change course late in development, but the supply-chain signals point to a serious push.
An open-ear approach would be notable. That style, seen in products like Bose’s open-ear buds and certain Sony models, keeps you aware of your surroundings while enabling voice capture. For an AI assistant that thrives on hands-free, always-available interactions—live translation, quick summaries, reminders, contextual search—this design could strike a balance between comfort, safety, and ambient mic performance.
Why Hardware Makes Sense for OpenAI’s Next Steps
OpenAI isn’t building its hardware effort from scratch. The company previously teamed up with Jony Ive, Apple’s longtime design chief, and later acquired his startup, bringing world-class industrial design and product instincts in-house. That pairing, combined with OpenAI’s rapid advances in multimodal models, points to a device built around natural conversation, vision, and audio, not keyboards and screens.
Technically, the timing aligns with a broader shift toward on-device AI. Chipmakers including Apple, Qualcomm, and Samsung are pushing neural accelerators that enable fast, privacy-sensitive inference on wearables and phones. A purpose-built audio wearable could offload latency-critical tasks locally while tapping the cloud for heavier reasoning, similar to how modern assistants blend edge and server processing.
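As an illustration only, here is a minimal Python sketch of that edge/cloud split, assuming a hypothetical assistant that keeps quick, latency-sensitive commands on a small on-device model and forwards heavier requests to a server-side model. The names here (LocalModel, CloudModel, route) are invented for the example and are not part of any OpenAI product or API.

```python
# Hypothetical sketch of hybrid edge/cloud routing for a voice assistant.
# Nothing here reflects OpenAI's actual device software; it only illustrates
# the general pattern of keeping fast tasks local and escalating heavy ones.
import time
from dataclasses import dataclass


@dataclass
class Request:
    text: str
    needs_reasoning: bool  # e.g., multi-step planning vs. a quick command


class LocalModel:
    """Stands in for a small model running on an on-device neural accelerator."""

    def run(self, text: str) -> str:
        return f"[local, low-latency] handled: {text}"


class CloudModel:
    """Stands in for a larger server-side model reached over the network."""

    def run(self, text: str) -> str:
        time.sleep(0.05)  # simulated network round trip
        return f"[cloud, heavier reasoning] handled: {text}"


def route(request: Request, local: LocalModel, cloud: CloudModel) -> str:
    """Keep latency-critical, simple tasks on-device; send complex ones to the cloud."""
    if request.needs_reasoning:
        return cloud.run(request.text)
    return local.run(request.text)


if __name__ == "__main__":
    local, cloud = LocalModel(), CloudModel()
    print(route(Request("set a 10-minute timer", needs_reasoning=False), local, cloud))
    print(route(Request("summarize today's meetings and draft follow-ups", needs_reasoning=True), local, cloud))
```

The design choice the sketch captures is the trade-off the paragraph describes: the local path answers in milliseconds and keeps audio on the device, while the cloud path accepts extra latency in exchange for heavier reasoning.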
Competition and Market Context for AI Wearables
Going after the ear is logical—the true wireless earbuds category is massive and sticky. Counterpoint Research has repeatedly ranked Apple’s AirPods as the market leader by shipments and revenue, with rivals like Samsung and Sony trailing. Dethroning that installed base will require standout functionality that AirPods don’t currently deliver, especially in proactive assistance, real-time translation, and note-taking that seamlessly syncs across devices.
There are cautionary tales. Recent “AI gadgets” that tried to replace the phone—like wearable pins and handheld AI companions—struggled with battery life, reliability, and unclear use cases, culminating in tough reviews and even a charging accessory recall in one instance. By contrast, Meta’s latest smart glasses have shown steadier traction by layering conversational AI and camera features onto a familiar form factor with a clear purpose. If OpenAI leans into everyday audio and pragmatic workflows rather than novelty, it will be better positioned.
What to Watch Next as OpenAI’s Device Plans Evolve
Key variables to monitor include whether OpenAI ships one model or a family of devices, how tightly the hardware integrates with iOS and Android, and whether the company opens a developer platform for skills that run on-device. Price and distribution will be just as critical—carrier bundles or enterprise channels could accelerate adoption if the device proves indispensable for meetings, fieldwork, and accessibility.
For now, the signals are clear: OpenAI expects to lift the curtain in the back half of next year, with the possibility of sales following soon after. If the rumors around an audio-first wearable and top-tier manufacturing pan out, the company won’t just be launching a gadget—it will be staking a claim to the next phase of ambient, conversational computing.