OpenAI is defending a ChatGPT experiment after unsolicited app prompts struck users as distinctly ad-like. The flare-up started when ChatGPT recommended that a user install the Peloton app during an unrelated conversation, raising concerns that advertising was making its way into a paid AI product. OpenAI said the prompt came "from a new app discovery feature, not an ad," but the incident demonstrated how skittish users are when anything inside an AI assistant looks like commercial intrusion.
In-chat app prompts spark user backlash
The drama started when Hyperbolic co-founder Yuchen Jin shared a screenshot on X showing ChatGPT suggesting the Peloton app mid-conversation. The post quickly attracted attention, garnering hundreds of reshares and roughly 462,000 views, because the suggestion was irrelevant to the topic of conversation and appeared inside a premium environment. Jin said he pays for the $200-a-month Pro plan, a tier in which users are promised no ads and minimal distractions.

Others shared similar stories, including one user who wrote that ChatGPT kept returning Spotify-related results even though they were a paid Apple Music subscriber. The general gripe wasn't that the assistant was bad at targeting, but that it seemed to be pushing branded services without good reason and with no way for people to turn the nudges off.
OpenAI clarifies: discovery, not advertising
OpenAI’s data lead for ChatGPT, Daniel McAuley, responded publicly in the thread, saying that “the Peloton prompt is not an ad and has no financial component.” He described it as a recommendation meant to help users find apps they can launch directly from within ChatGPT. He also admitted that the recommendation was a “bad/confusing experience” because it didn’t relate to the conversation.
The company has been testing app integrations that let people use third-party services without leaving the chat: booking travel through Booking.com, creating poster designs in Canva, opening a course invite link from Coursera, or pulling up an apartment listing on Zillow. The feature is available for logged-in users outside the EU, Switzerland, and the U.K. Though OpenAI says it is iterating on relevance and user experience, critics point out that there is currently no setting to globally turn off such prompts, which heightens the sense of intrusiveness.
Why the recommendations seemed like ads to users
Even without a financial angle, optics count. An unsolicited, brand-specific suggestion that points to a paid commercial service and interrupts a conversation looks like an ad to many people, especially inside a paid product. Tech companies have faced similar backlash when interface changes blurred the distinction between organic results and ads. Regulators have long said that clear labeling and context are critical so people can distinguish editorial content from advertising messages.
The Federal Trade Commission’s guidance on advertising and endorsements in the U.S. emphasizes clear, conspicuous disclosures when there is a commercial relationship. There is no such relationship here, OpenAI says. Yet the agency’s broader message is applicable: presentation and placement can deceive, and perceived advertising in premium environments commands outsized reputational risks.

Trust and the AI platform gamble for app discovery
The stakes go well beyond a cringe-worthy Peloton prompt. OpenAI hopes ChatGPT can become a sort of meta-platform where people discover and use third-party services without ever leaving the chat, and that vision lives or dies by trust. If suggestions feel pushy, irrelevant, or commercial, users could fall back to more “model-only” interactions, or switch to a competing assistant such as Google’s Gemini or Anthropic’s Claude, which has so far been more conservative about this kind of in-chat discovery.
Design choices will be decisive. Transparent controls to turn off suggestions, stronger relevance thresholds, and an upfront explanation of how and when apps are suggested could realign what people expect. So too could a “just-in-time” consent flow — asking users if they wish to activate discovery the first time it is triggered, rather than turning it on by default. Internally, teams generally track metrics such as opt-out rates, session interruption, and user satisfaction to make sure that suggestions are doing more good than harm.
What OpenAI will do next to improve app discovery
OpenAI claims it is working to improve the recommendation system as well as the user experience more generally. Look for three cues:
- A user-facing switch for prompt flow
- Clearer language around discovery
- Tighter contextual matching so that recommendations surface only when obviously useful
Another thing to keep an eye on is how platform partners respond; if app developers see weak engagement or user backlash, the ecosystem will grow more cautiously.
For now, the episode is a stark reminder that in conversational AI, there is a fine line between helpful and promotional. Relevance, timing, and consent make all the difference: even before money changes hands, an assistant can come across as a trusted guide or as a sales rep. Until OpenAI proves its app suggestions serve the former role, it risks losing users, and these days people vote with their clicks faster than ever.