OpenAI’s annual developer conference doubled down on software and platform depth, marking a clear shift in strategy as ChatGPT graduates from standalone assistant to application platform. No new hardware was announced, but the company did launch fresh tools for building, shipping, and monetizing AI-driven experiences: an Apps SDK, AgentKit for agentic workflows, and substantial model upgrades.
ChatGPT Turns Into an App Platform with Embedded Experiences
The headline change is “talking to apps” within ChatGPT. Instead of hopping between tabs, users can open an app pane, ask in natural language, and receive a result inline, complete with rich UI. In the keynote, OpenAI showed Coursera videos playing inside the chat, Canva generating designs on command, and Zillow serving interactive listings that could be filtered through conversational prompts.
It’s more than a plugin revival. Apps can run fullscreen inside ChatGPT, make conversations context-aware, and hand off tasks to other services. For users, that reduces friction; for developers, it’s a distribution channel with an interface layer built in.
Apps SDK Comes with MCP Support for Tools and Data
OpenAI released the Apps SDK in preview, giving developers the primitives to build full-stack apps that run within ChatGPT. The Apps SDK also supports Anthropic’s Model Context Protocol (MCP), an open standard Anthropic created to connect models to tools and data sources. That choice suggests the start of an interoperability layer between competing AI ecosystems.
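OpenAI hasn’t published the full Apps SDK surface here, but because it speaks MCP, the shape of a tool backend is already familiar. Below is a minimal sketch of an MCP server exposing one tool, using the open-source `mcp` Python package’s FastMCP helper; the server name, tool, and data are illustrative, not part of any announced app.

```python
# Minimal MCP server sketch: exposes one tool an MCP-aware client
# (such as a ChatGPT app backend) could call. The tool and its data
# are illustrative; only the FastMCP plumbing comes from the open
# `mcp` Python SDK.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("listing-search")  # server name shown to clients

@mcp.tool()
def search_listings(city: str, max_price: int) -> list[dict]:
    """Return property listings under max_price in the given city."""
    # Stand-in data; a real app would query its own backend here.
    demo = [
        {"city": "Austin", "price": 450_000, "beds": 3},
        {"city": "Austin", "price": 820_000, "beds": 5},
    ]
    return [l for l in demo if l["city"] == city and l["price"] <= max_price]

if __name__ == "__main__":
    mcp.run(transport="stdio")  # stdio transport; HTTP transports also exist
```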
Developers will be able to submit apps for review and listing on the platform later this year, OpenAI said, providing a marketplace path and a clearer monetization story than the early days of GPT-based add-ons. For teams that have struggled with bespoke chat UIs and brittle tool integrations, the SDK can standardize much of that glue.
AgentKit And The Emergence Of Agentic Workflows
AgentKit is OpenAI’s toolkit for developing, deploying, and evaluating autonomous workflows. It’s built on top of the Responses API and pairs with a connectors registry that links agents to data and third-party systems. Think of it as batteries-included scaffolding for long-running tasks, with instrumentation to know what the agent is doing and why.
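The keynote coverage here doesn’t spell out AgentKit’s own interfaces, but since it sits on top of the Responses API, the underlying call pattern is already public. A minimal sketch using the `openai` Python SDK follows; the model name and the `lookup_order` function tool are illustrative assumptions, not part of AgentKit itself.

```python
# Sketch of the Responses API pattern that AgentKit reportedly builds on.
# The model name and the lookup_order tool are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "name": "lookup_order",
    "description": "Fetch the status of a customer order by ID.",
    "parameters": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}]

response = client.responses.create(
    model="gpt-4.1",  # any tool-capable model works here
    input="Where is order 8123?",
    tools=tools,
)

# The model either answers directly or emits a tool call for the
# application to execute and feed back on a follow-up request.
for item in response.output:
    if item.type == "function_call":
        print("tool requested:", item.name, item.arguments)
```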
In a live demo, an OpenAI product lead built an agentic workflow in under 10 minutes: parsing a conference schedule, surfacing actionable sessions, and adding guardrails to enforce PII policies before any output was released.
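The guardrail from the demo wasn’t shown, but the pattern it describes, checking output for PII before release, is straightforward to sketch. The regexes and redaction policy below are illustrative placeholders, not AgentKit’s built-in guardrails.

```python
# Illustrative PII gate: scan agent output before releasing it downstream.
# The patterns and redaction policy are placeholders, not an official
# AgentKit guardrail implementation.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def release_or_block(agent_output: str) -> str:
    """Redact matched PII; a stricter policy could block the output entirely."""
    redacted = agent_output
    for label, pattern in PII_PATTERNS.items():
        redacted = pattern.sub(f"[REDACTED {label.upper()}]", redacted)
    return redacted

print(release_or_block("Reach the speaker at jane.doe@example.com or 555-123-4567."))
```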
The point landed: agents are transitioning from research toys to production assets with observability and policy controls built in, mirroring best practices established by organizations such as NIST for risk-managed AI deployment.
OpenAI also alluded to ChatKit, a companion toolkit for chat experiences, though the keynote was light on details. Together they round out the platform story: interface, orchestration, and evaluation in one place.
Codex Graduates with Real‑Time Control Across Devices
Codex, OpenAI’s coding agent, is now generally available following its research preview. The company said usage has increased 10-fold since the start of August, driven by platform expansion and model improvements. A live demo went beyond code completion, showing Codex wiring an Xbox controller to pan a camera, building a voice assistant to control lights, and auto-generating a credits overlay, an illustration of how code agents can bridge software, hardware, and UX in real time.
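The demo code itself wasn’t shared; the snippet below only illustrates the kind of glue an agent like Codex can produce on request, reading a gamepad axis with `pygame` and mapping it to a pan angle. The `set_pan_angle` camera hook is a hypothetical placeholder.

```python
# Illustrative only: controller-to-camera glue in the spirit of the Codex
# demo. pygame's joystick API is real; set_pan_angle stands in for
# whatever camera SDK you actually use.
import pygame

def set_pan_angle(degrees: float) -> None:
    print(f"panning camera to {degrees:+.1f} deg")  # placeholder for a camera SDK call

pygame.init()
pygame.joystick.init()
stick = pygame.joystick.Joystick(0)
stick.init()

clock = pygame.time.Clock()
while True:
    pygame.event.pump()           # refresh joystick state
    x = stick.get_axis(0)         # left stick, horizontal: -1.0 .. 1.0
    if abs(x) > 0.1:              # ignore small dead-zone drift
        set_pan_angle(x * 90)     # map full deflection to +/-90 degrees
    clock.tick(30)                # poll at ~30 Hz
```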
The implication for engineering teams is clear: the coding assistant is growing into system composition and device control. That opens doors in robotics, IoT, AV, and creative tooling, where latency, reliability, and safety checks matter as much as syntax.
New Models and Media Capabilities Arrive in the API
OpenAI added GPT-5 Pro to the API, positioning it for harder work where deeper reasoning and greater precision are required. The company also introduced gpt-realtime-mini, a lightweight model built for low-latency conversational exchanges such as voice interfaces, live help, and on-device contexts that demand quick responses.
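The exact model identifier string below is an assumption based on the announced name; the call shape is the standard Responses API in the `openai` Python SDK. gpt-realtime-mini targets the streaming Realtime interface rather than a one-shot request, so it isn’t shown here.

```python
# Selecting the heavier reasoning tier by name; the identifier string is
# an assumption based on the announced name, so confirm it against the
# model list available to your account.
from openai import OpenAI

client = OpenAI()

review = client.responses.create(
    model="gpt-5-pro",  # assumed identifier for the new reasoning tier
    input="Review this database migration plan for failure modes: ...",
)
print(review.output_text)
```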
Sora 2, OpenAI’s latest-generation video model, is now available through the API. During onstage demos, the focus was on photorealism and consistent motion across varied prompts. Opening Sora 2 to developers should accelerate use cases in advertising, education, and previsualization workflows, where controllability and integration with established asset pipelines are a must.
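A hedged sketch of submitting a Sora 2 job: the `videos` method names and status values below are assumptions modeled on the SDK’s pattern for asynchronous media jobs, so verify them against the current openai-python reference before use.

```python
# Assumed video-generation flow: submit a job, then poll until it finishes.
# Method names and status strings are assumptions, not confirmed API.
import time
from openai import OpenAI

client = OpenAI()

job = client.videos.create(
    model="sora-2",
    prompt="A 10-second product tour of a desk lamp, studio lighting, slow dolly-in.",
)

# Video generation is asynchronous: poll the job, then hand the finished
# asset to your existing pipeline.
while job.status in ("queued", "in_progress"):
    time.sleep(5)
    job = client.videos.retrieve(job.id)

print(job.status, job.id)
```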
Why Developers Should Care About These Platform Changes
Collectively, these updates turn ChatGPT into a distribution surface, an operating environment, and a runtime for agents. The Apps SDK cuts the effort of building custom chat shells. AgentKit brings first-party observability and policy controls. The model updates add reasoning depth and real-time responsiveness. For startups, that means a shorter path from idea to shipping; for enterprises, a more transparent way to govern AI in production.
OpenAI also foregrounded evaluation and guardrails alongside raw capabilities. For enterprise adoption, platform-level controls will matter at least as much as model performance, and as regulators and standards bodies codify guidance on AI reliability and privacy, the platform layer will shape how practitioners apply it.
What Was Missing from OpenAI’s Software‑First Keynote
Despite a glitzy lineup and more than 1,500 attendees, the keynote stayed squarely focused on software. Rumored hardware collaborations and devices never appeared onstage, a reminder that OpenAI’s first move in the battle for developer attention is making ChatGPT the place where apps, agents, and multimodal models meet.
The throughline is clear: OpenAI wants ChatGPT to be where people do work, and where developers follow those people. With the Apps SDK, an app marketplace, AgentKit, Codex, and a fresh set of models joining the ecosystem it is assembling, the San Francisco-based company is effectively betting on an AI-native platform over a siloed grab bag of tools.