AI chatbots are now the fastest way to get product recommendations, but a warning from inside the media and tech industry is straightforward and grim: those confident answers increasingly come from marketing materials, not independent reporting. That’s the crux of the issue raised by Vivek Shah, chief executive officer of Ziff Davis, who contends that consumers can be pushed toward brand-favorable outcomes when chatbots lean on promotional content rather than vetted reviews.
Shah’s view isn’t anti-AI. He’s publicly bullish on the technology’s potential in business and daily life, even though his company has pushed back against how some models use publishers’ content. His concern is more specific and pragmatic: if the system elevates brand pages, sales collateral, and lightly labeled promotional posts to a place alongside independent reviews, then the pool of guidance you tap for big-ticket purchases can start to look suspect before you’ve even opened a spec sheet.
The Big Risk: Source Quality and Incentives
Current language models are strong summarizers. But when they go digging for sources, they may not always prioritize independent testing and investigative tech journalism over slick vendor pages. Brands update their materials regularly, structure their data, and optimize for discoverability, signals that retrieval systems tend to reward. Exhaustive product testing, by contrast, is slower and costlier, and sometimes sits behind a paywall, which can make it less visible to automated systems.
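To make that imbalance concrete, here is a deliberately naive relevance scorer in Python. It is purely illustrative, not any real engine’s ranking: the weights, fields, and example sources are all invented. The point is that signals like keyword density, freshness, and structured markup are cheap to measure, while testing rigor is invisible to a score built this way.

```python
# Toy illustration, not any real ranker: how cheap discoverability
# signals can outweigh editorial rigor in a naive retrieval score.
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    keyword_density: float     # 0..1, how densely the page matches the query
    freshness: float           # 0..1, recently updated pages score higher
    has_structured_data: bool  # schema.org-style product markup
    independent_testing: bool  # slow, costly, sometimes paywalled

def naive_relevance(s: Source) -> float:
    """Scores only what is cheap to measure; testing rigor is invisible."""
    score = 0.6 * s.keyword_density + 0.3 * s.freshness
    if s.has_structured_data:
        score += 0.2  # structured markup is easy for crawlers to reward
    return score

vendor_page = Source("brand product page", 0.95, 0.9, True, False)
lab_review = Source("independent lab review", 0.55, 0.4, False, True)

for s in sorted([vendor_page, lab_review], key=naive_relevance, reverse=True):
    print(f"{s.name}: {naive_relevance(s):.2f}")
# The vendor page outranks the lab review because the scorer
# never looks at independent_testing.
```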
That imbalance affects outcomes. In spot checks by several tech reporters, various chatbots delivered a mix of vendor materials and publisher reviews in response to queries about whether a particular pair of smart glasses is worth buying. Some tools put sources front and center; others hid them behind a second click. And because responses can vary by user and prompt, two people asking the same question might receive different mixes of marketing and journalism.
The credibility gap sits on top of, and is aggravated by, well-known modeling problems. Hallucinations remain a stubborn problem, especially in domains where precise facts are at stake, according to the Stanford AI Index. And when the inputs already skew toward sales copy, the danger is not just mistaken information but confident, well-worded advice that reflects a seller’s incentives more than a buyer’s interests.
How Marketing Infuses Answers in AI Shopping Chats
Several trends are pushing promotional content to the top. Search ecosystems are fighting the proliferation of mass-produced, AI-generated pages built to rank, with Google clamping down on “scaled content abuse” in recent core updates. Yet NewsGuard has identified hundreds of AI-generated sites that ape editorial formats while offering none of a newsroom’s accountability. Because that content is easy to crawl and dense with product keywords, retrieval pipelines can score it as highly relevant.
Meanwhile, affiliate models and soft sponsorships blur lines across the wider web. The Federal Trade Commission’s Endorsement Guides require that material connections between endorsers and sellers be clearly disclosed in paid relationships and testimonials, but enforcement is spotty at best in the long tail of the internet. A chatbot that doesn’t distinguish lightly disclosed advertorials from rigorous lab tests can unwittingly stack the deck in the seller’s favor.
What Smart Shoppers Should Do to Vet AI Advice
- Click the citations. If a recommendation leans heavily on brand blogs, press releases, or retailer listings, treat it as marketing, not a verdict. Then ask the chatbot to redo its answer using only independent reporting, and to list its sources clearly (a rough screening sketch follows this list).
- Look for methodology. Good reviewers explain how they test: battery cycles, color accuracy, drop tests, longevity checks. Outlets like Consumer Reports set one benchmark; so do specialty labs that publish repeatable protocols. Cross-check at least two independent sources before pulling the trigger.
- Probe for trade-offs. Ask for pros and cons, common failure modes, and known recalls. Request long-term reliability data, warranty terms, and total cost of ownership. A marketing-led answer leads with features; a user-first answer starts with fit, durability, and serviceability.
- Mind the money flow. A commission on purchases doesn’t automatically invalidate a review, but the page should carry clear disclosures, and ideally the testing should speak for itself. Treat missing or murky disclosures as a red flag.
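As a rough illustration of the first tip, this minimal Python sketch flags answers whose citations skew toward vendor or retailer domains. It assumes you have already extracted the cited URLs, and the domain set is an illustrative stand-in, not a vetted taxonomy.

```python
# Minimal sketch: flag answers whose citations skew toward vendor or
# retailer domains. The domain set below is illustrative, not vetted.
from urllib.parse import urlparse

VENDOR_HINTS = {"amazon.com", "bestbuy.com"}  # brand/retailer examples

def marketing_share(cited_urls: list[str]) -> float:
    """Fraction of citations that look like vendor or retailer pages."""
    if not cited_urls:
        return 1.0  # no citations at all is its own red flag
    vendor = sum(
        1 for url in cited_urls
        if urlparse(url).netloc.removeprefix("www.") in VENDOR_HINTS
    )
    return vendor / len(cited_urls)

citations = [
    "https://www.amazon.com/example-product",
    "https://www.bestbuy.com/example-listing",
    "https://www.consumerreports.org/example-review",
]
if marketing_share(citations) > 0.5:
    print("Citations skew toward marketing; ask for independent reporting.")
else:
    print("Citation mix includes independent sources.")
```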
Why Publishers and AI Need a Ceasefire on Reviews
Shah’s critique suggests one industry fix: better provenance and licensing. Content credentials such as C2PA can help models trace material back to its creator, and structured metadata may let systems surface independent, authenticated reviews over marketing. On the business side, licensing clean review data would give models a less noisy supply of consumer advice, reducing the temptation to lean on promotional content simply because it is plentiful and easy to crawl.
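As a hedged illustration of the metadata idea, the Python sketch below checks whether a page’s extracted JSON-LD declares a schema.org Review with a named author, one weak signal of an attributable editorial source. A real pipeline would verify C2PA credentials cryptographically; this only inspects markup a crawler might pull from a page.

```python
# Hedged sketch: treat a schema.org Review with a named author as one
# weak provenance signal. A real pipeline would verify C2PA credentials
# cryptographically; this only inspects JSON-LD a crawler might extract.
import json

def is_attributed_review(jsonld_text: str) -> bool:
    """True if the JSON-LD declares a Review with a named author."""
    try:
        data = json.loads(jsonld_text)
    except json.JSONDecodeError:
        return False
    if not isinstance(data, dict) or data.get("@type") != "Review":
        return False
    author = data.get("author")
    if isinstance(author, dict):
        return bool(author.get("name"))
    return bool(author)  # author may also be a plain string

sample = (
    '{"@context": "https://schema.org", "@type": "Review",'
    ' "author": {"@type": "Person", "name": "Jane Tester"}}'
)
print(is_attributed_review(sample))  # True
```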
Product research will only become more central to AI. If builders elevate transparent, testing-led sources, and if consumers start demanding proper citations, the technology will make buying smarter, not just faster. Until then, the safest assumption is the one Shah emphasizes: an answer is only as trustworthy as the sources beneath it.