Power users on r/ChatGPT are complaining about messages that they say look and feel like ads appearing inside their chats, but OpenAI says the messages aren’t ads. The controversy centers on “app suggestions” that pop up in the middle of conversations, a design decision that has sown confusion and frustration, especially among paying users who expect an ad-free experience.
Screenshots circulating on X and Reddit show ChatGPT pushing branded apps in response to queries that users say have nothing to do with the recommendations offered. In one frequently cited instance, the assistant suggested the Peloton app in a conversation unrelated to fitness hardware or training plans.

A member of OpenAI’s data team replied publicly, saying there is no financial incentive behind the recommendations and acknowledging the relevance problem. The company says it is iterating on the feature and user experience to avoid confusing users, but that response hasn’t dampened frustration among subscribers, some of whom say they’re paying up to $200 per month for advanced access.
What Users Are Seeing Inside Their ChatGPT Conversations
Descriptions point to small, in-line prompts for app installs, or brand-specific answers, appearing within otherwise ordinary chats. Because the prompts are woven into the assistant’s own generated text, users say they read like native ads rather than clearly distinct product recommendations.
Context is the crux. A suggestion to try a meditation app in a sleep hygiene thread might be genuinely helpful; a suggestion to install a cycling app in a coding question is noise. When relevance falters, the nudge feels promotional to users, regardless of whether money changed hands.
The complaints echo a larger UX lesson from search and social platforms: the more “native” a suggestion looks, the more clearly it has to be labeled and the more precisely it has to be targeted. Absent that, even well-intentioned advice gets mistaken for advertising, and trust erodes fast.
Are They or Aren’t They Ads? OpenAI Says They Aren’t
OpenAI says they’re not ads, and by a narrow definition (no paid placement, no revenue sharing) that’s true. For end users, though, the question is one of function, not payment. When a system mixes brand-specific recommendations into its answers, the bar is explicit disclosure and high relevance.
This gray zone has long concerned U.S. regulators. The FTC’s native advertising guidance advises clear and conspicuous disclosure whenever promotional material resembles editorial content. Even if OpenAI’s recommendations aren’t paid, the same usability principles apply: clear labeling, consistent placement, and a clean opt-out.

It’s also a question of data governance. Users want to understand which signals trigger a suggestion. Is it based on the current prompt, a model heuristic, or prior interactions? Surfacing the targeting logic, even in broad strokes, is often what separates a “smart assist” from a “stealth ad.”
The Business Pressures Behind ChatGPT’s UX Choices
Generative AI is expensive to run at scale. OpenAI’s leadership has said publicly that training frontier models can cost more than $100 million, and serving multimodal models carries ongoing GPU-heavy inference costs. Those economics push the industry toward alternative revenue lines such as ads, partnerships, and app ecosystems.
There is precedent across the industry. Microsoft’s Copilot and Google’s AI Overviews have experimented with sponsored or commercial results adapted to conversational formats, sometimes with additional labeling. Industry reporting has also suggested that OpenAI has explored ad concepts, though the timelines and designs remain in flux.
That background helps explain why “not ads” still drew an outcry: the slippery slope is apparent to anyone paying attention. If suggestions are already appearing in core chat flows, some speculate, fully paid placements are not far away.
What OpenAI Could Do Next to Rebuild User Trust
Three moves would defuse this problem in a hurry: raise relevance, add unambiguous labels, and put users in charge. An unobtrusive “Suggested by ChatGPT, not sponsored” badge, a toggle to turn off app suggestions, and visible guardrails, such as restricting suggestions to clearly commercial queries, would go a long way.
If OpenAI eventually adopts actual paid placements, best practices are well established: prominent “Sponsored” labels, visually separate containers, clear rules about what can be promoted, and a public record of which ad copy was served and when. Enterprise tiers in particular will demand tight controls, or default-off, for anything that resembles promotion.
For now, the company says it is iterating. The question will be whether such iterations can rebuild trust without giving up too much utility. In a market with fickle user loyalty and low switching costs, clarity and consent are competitive advantages.