Google is quietly increasing the use of AI-generated headlines in Discover’s “trending topic” cards, broadening an experiment that replaces human-written titles with machine-crafted ones. The company says the change boosts user satisfaction, but the rollout is already drawing criticism as awkward or inaccurate AI titles appear above stories that publishers did not write or approve.
What Is Changing in Discover’s Trending Topic Cards
Discover’s trending topic units compile coverage from multiple outlets into a single card. Instead of selecting one of the original headlines, Google’s system synthesizes information across sources and generates its own title, then presents it with a large image and icons from several publishers.

These cards can be identified by a few tells. You’ll see up to three outlet icons with text like “Outlet Name +11” at the top and no Follow button in the corner. Tapping the AI-crafted headline opens an AI-generated summary page, while tapping the image typically sends you to one publisher’s article. Tapping the “Outlet +X” label reveals a list of original stories with the human-written headlines intact.
Because the card looks very similar to a standard Discover story, many readers assume the headline came from the linked outlet. That design choice is fueling confusion when the AI gets things wrong.
Why AI-Generated Headlines in Discover Are Backfiring
AI summarization often struggles with nuance. In news, nuance is everything. Small phrasing errors can flip meanings—turning a rumor into a confirmation, a delay into a cancellation, or a context-specific claim into a universal fact. Reporters at The Verge say Google plans to continue applying these AI titles to trending topics, citing internal claims of improved satisfaction, yet many examples circulating on social media show clumsy composites that misstate the core of a story.
Attribution is another problem. The card borrows imagery from one publisher, stacks logos from others, and then inserts a machine-written headline. When that headline misrepresents the piece, blame tends to fall on the outlet pictured—damaging hard-won credibility and sparking reader complaints to the wrong newsroom.
Broader trust dynamics are at play. The Reuters Institute has documented that audiences increasingly encounter news via platform-controlled feeds, and trust in news is fragile in these environments. If AI headlines distort or oversimplify, they add friction at a moment when publishers and platforms are already under scrutiny for accuracy and accountability.
Impacts on Publishers and SEO from AI Headlines
Discover is a major mobile traffic source for many outlets. When AI rewrites a headline, it can alter click-through rates, skew reader expectations, and drive bounces when the article doesn’t match the machine’s framing. That dynamic is especially risky for sensitive beats—public health, policy, or financial news—where precise language matters.

There’s also a branding challenge. Newsrooms invest heavily in headlines that balance accuracy, context, and voice. Replacing those titles erodes editorial control and can muddy signals of expertise that Google says it values in its own guidance on experience, expertise, authoritativeness, and trustworthiness.
Several industry analysts, including researchers at Nieman Lab and Columbia Journalism Review, have warned that AI-driven presentation layers can inadvertently reward sensational or ambiguous wording. Even if the underlying article is rigorous, a machine-summarized hook may tilt toward virality over clarity.
How to Spot and Verify AI-Titled Cards in Discover
There are quick checks readers can use to avoid confusion. Look for multiple outlet icons and the absence of a Follow button—that usually indicates a trending topic card. If the headline sounds off, tap the “Outlet +X” label to see the original headlines before clicking through. When possible, open two or three sources to compare details, especially for developing stories or contentious topics.
If you land on a story that doesn’t match the Discover headline, that headline was likely generated by Google rather than written by the publisher. Consider reporting the card through in-app feedback and judging the article on its own merits.
What Google Should Do Next to Fix AI Headlines
Transparency needs to improve. Clearer labels that say “AI-Generated Headline” on the card, side-by-side display of the top publisher’s original headline, and standardized tap targets—headline to summary, image to summary, outlet link to article—would reduce misattribution.
Google could also offer publishers controls: an opt-out from AI titles for specific topics, stronger provenance signals (author, outlet, publish time, and source count), and visible citations within summaries. Aligning Discover’s presentation with the company’s own responsible AI principles would help rebuild trust.
AI can be useful for clustering coverage and surfacing diverse sources, but headlines are journalism’s sharp edge. When a machine blunts or bends that edge, readers and publishers pay the price. If AI-titled cards are here to stay, they need guardrails that put accuracy and attribution first.
