OpenAI is now turning ChatGPT into a platform, with native apps that can be called up and run directly within the chatbot — and opening a preview of an Apps SDK to developers. Launch partners including Booking.com, Expedia, Spotify, Figma, Coursera, Zillow, and Canva bring interactive services directly into everyday conversations. The shift positions ChatGPT not as a standalone assistant but as a place where work happens.
The move is not a wholly new concept — OpenAI has experimented in this direction before with the GPT Store — but it removes a layer of friction by bringing third-party tools inline with replies. OpenAI has always positioned ChatGPT as a productivity and learning assistant. Native apps bring that pitch to life by letting the model summon, display, or control real software to get the job done.
How ChatGPT apps work inside chats with interactive tools
Users can call apps in natural language — like “Figma, turn this sketch into a diagram” or “Coursera, help me learn the basics of machine learning.” And in a real estate demo, requesting apartments within a price range caused an interactive map from Zillow to materialize inside the chat. The interface itself is rich, hosting video that can be pinned and adjusted on the fly, and multi-step flows where the assistant orchestrates the UI.
ChatGPT also suggests useful apps when the intent is obvious. Request a weekend party playlist, and the assistant may open Spotify on its own; upcoming partners like DoorDash, Instacart, Uber, and AllTrails will let you take quick actions, from ordering food to planning a trip, right in the conversation without juggling multiple apps.
Accounts and subscriptions carry over. If you already pay for a service, you can log into it within ChatGPT to unlock your premium features. That continuity is critical: users find their data and settings where they expect them, and apps gain engagement instead of offering a parallel, cut-rate experience.
What developers get from the new ChatGPT Apps SDK
The Apps SDK gives developers a framework to specify capabilities, UI components, and actions that ChatGPT can invoke. Under the hood, OpenAI uses the Model Context Protocol (MCP) to let applications connect to data sources and tools in a standardized, secure manner. Apps can read context from the assistant, perform operations, and render interactive output directly in the chat pane.
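To make the pattern concrete, here is a minimal sketch of what an MCP-style tool looks like in spirit: a name, a JSON Schema describing its inputs, and a handler the assistant can call with model-extracted arguments. The function name, manifest fields, and sample data below are illustrative assumptions, not the actual Apps SDK API; real MCP tool definitions do pair a description with a JSON Schema `inputSchema` in this general shape.

```python
import json

def search_listings(params):
    """Toy handler: filter an in-memory listing set by price range.
    A real app would query its own backend here."""
    listings = [
        {"address": "12 Oak St", "price": 1800},
        {"address": "48 Elm Ave", "price": 2600},
    ]
    return [
        l for l in listings
        if params["min_price"] <= l["price"] <= params["max_price"]
    ]

# Hypothetical tool manifest: the assistant reads the description and
# schema to decide when to call the tool and how to shape its arguments.
TOOL_MANIFEST = {
    "name": "search_listings",
    "description": "Find apartments within a price range.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "min_price": {"type": "number"},
            "max_price": {"type": "number"},
        },
        "required": ["min_price", "max_price"],
    },
}

# Simulate the assistant invoking the tool with arguments it extracted
# from a prompt like "show me apartments under $2,000".
result = search_listings({"min_price": 0, "max_price": 2000})
print(json.dumps(result))
```

The structured result is what the app's UI component (a map, a card list) would render inline in the conversation, rather than raw text.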
Crucially, distribution is baked in. Instead of hoping users find a separate app store, developers can serve demand at the point of intent. We've previously reported 100 million-plus weekly active users for ChatGPT; even relatively light exposure in high-intent conversations could drive meaningful trial and conversion for both SaaS vendors and consumer services.
Monetization is on the roadmap. OpenAI says apps can charge using methods like Instant Checkout within ChatGPT, thus paving the way to offer paid features, usage tiers, or one-off purchases. That turns the surface from a discovery channel into a transactional one.
Privacy and ranking questions for in-chat third-party apps
Launching a platform within a chatbot raises predictable but critical questions. How much of a conversation can third-party apps observe: the entire thread, a few recent turns, or just the lines that triggered them? OpenAI says developers will be limited to data collection that is "necessary and proportionate" and must be transparent about permissions, though the specifics of scopes and retention defaults will matter, especially for enterprises with strict compliance requirements.
Another open question is that of ranking and neutrality. When there are multiple services that can satisfy the same request — for example, picking between delivery apps — how does ChatGPT pick what to surface first? OpenAI has said it will focus on user experience, but any combination of relevance signals, user preference, and potential for paid placement will require careful governance so as not to look like pay-to-play.
A bet on chat-native software and mini-app ecosystems
Strategically, this is OpenAI’s WeChat-moment ambition: mini-apps that live where users already spend time. The approach resembles ecosystems such as Slack apps, Salesforce’s AppExchange, and extensions for Microsoft Copilot and Google’s Gemini, but with a twist — the primary interface is conversation, not menus or buttons. That means you feel like you’re following guided flows, not hopping around from app to app — whether planning a trip, designing a document, or selecting which courses to take.
If successful, "apps inside chat" could reroute user journeys that today originate in search engines or on mobile home screens. For developers, the calculus changes: optimize for prompts and assistant reasoning rather than onboarding and purchase funnels. The pitch to users amounts to fewer tabs and more results: tell the assistant what you want, and it brings in the pieces.
What to watch next as ChatGPT apps roll out to users
Three signs will tell us if this platform works.
- Latency: rich, multi-step apps need to feel instant, or else users will bounce.
- Conversion and re-engagement: do in-chat trials convert into paying customers, and do users return to partner apps?
- Controls: enterprise admins will want detailed data scopes, audit logs, and permissioning before rolling out widely.
OpenAI is giving ChatGPT the tools to become more than an answer engine. With native apps, the assistant plans, shows, and transacts. The specifics of privacy defaults, ranking transparency, and developer economics will shape trust. But the direction is clear: the center of gravity of day-to-day software is shifting into the chat window.