Google has been testing an AI-based system in Discover that rewrites publishers' headlines, showing short auto-generated titles in the feed until a user taps through. Early examples shared by industry observers indicate the system sometimes alters meaning, and the test raises new questions about accuracy, attribution, and the power of platform intermediaries to make editorial decisions on behalf of news organizations.
What will be changing in Discover’s AI headline tests
The test is visible to selected Google Discover users on both Android and iOS. Instead of the publisher's native headline, readers see an AI-written one, typically a pithy four-word phrase, and only encounter the original version after they open the article. A Google spokesperson called this “testing a new design for news that makes the website easier to scan and right-sizes information so users can find what they’re looking for.”

It continues earlier experiments in which Discover wrote its own summaries of stories to help users decide what to read. The new step goes further because it recasts the headline itself, the single most important piece of editorial real estate for readers and publishers alike.
Early mistakes highlight risks to accuracy in Discover
Public examples identified by journalists show the system sometimes generating incorrect facts or oversimplifying stories. One AI rewrite proposed a “price revealed” angle for Valve’s Steam Machine when no pricing details were available yet. Another converted a discussion of Baldur’s Gate 3 player behavior into “BG3 players abuse kids,” removing the important context that the “kids” were non-player characters. And in other cases, unique reporting, such as how a particular Microsoft team is employing AI, was smoothed into something more generic: “Microsoft developers using AI.”
These are not just cosmetic slips. Headlines set readers’ context and expectations and shape what they remember of a story. Both the Reuters Institute and academic work cited by Columbia Journalism Review have found that headlines play a large role in comprehension, and that they can leave lasting misimpressions when they exaggerate or misframe the content.
Why Google may be testing AI headline changes in Discover
Discover is a massive distribution channel: Google has previously said the feed surfaces content to more than 800 million monthly users. Normalizing headlines into uniform, scannable bites may increase feed cohesion, reduce truncation, and promote topic discovery across publishers. It also fits Google’s broader push to aggregate and centralize information across products, from AI Overviews in Search to experimental article summaries.
But the trade-offs are significant. When headlines are rewritten, editorial control shifts from publishers to the platform, which tends to favor consistency and engagement over nuance. Even small distortions can have an outsize influence on trust, click behavior, and the flow of credit to original reporting.

Publisher impact and SEO implications of AI rewrites
Headlines are the quintessential audience-growth lever: they affect click-through rates, time on page, and shareability. If Discover replaces or demotes original titles, publishers may see shifts in performance metrics and less differentiation for exclusive angles. The News Media Alliance and other industry groups have long argued that platform-level presentation changes can have a material impact on revenue and brand recognition.
Attribution and quality issues remain. Google tells sites to follow E-E-A-T principles, yet AI rewrites can strip out specificity, source names, or scope qualifiers that signal expertise. If an AI-written headline is misleading or ambiguous, both the platform and the publisher bear reputational risk, with little visibility into how the flawed framing reached the user in the first place.
Governance, transparency, and labeling for AI headlines
Regulators are increasingly demanding transparency in algorithmic presentation. The EU’s Digital Services Act, for example, nudges platforms toward transparency in ranking and recommendation systems. If AI-altered headlines become widespread, clear labeling of what was changed, who changed it, and why could become a baseline for compliance and user trust.
Best practices would include:
- Labeling when a headline is AI-generated
- Making the original headline easy to find and compare
- Offering a simple way to report automation errors
- Providing an appeals process for publishers when a rewrite introduces errors or strips away crucial context
What to watch next as AI headline tests evolve
Google usually iterates quickly on UX experiments. Watch for the test expanding beyond small groups, clearer labels for AI-edited text, and policy changes about when Discover relies on publisher headlines versus machine-generated rewrites. Also watch whether the system learns to retain named entities, qualifiers, and exclusivity signals in the kinds of examples where it failed early on.
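Publishers could prototype that entity-retention check themselves. Here is a minimal sketch, with a hypothetical helper name and crude token matching standing in for proper named-entity recognition, that flags capitalized or numeric tokens an AI rewrite drops from the original headline:

```python
import re

def dropped_signals(original: str, rewritten: str) -> set[str]:
    """Return tokens from the original headline that look like named entities
    or qualifiers (capitalized or containing digits) and are missing from the
    rewrite. Hypothetical sketch; a real monitor would use NER."""
    def signals(text: str) -> set[str]:
        tokens = re.findall(r"[A-Za-z0-9'%$.-]+", text)
        return {t for t in tokens
                if t[0].isupper() or any(ch.isdigit() for ch in t)}

    rewritten_lower = rewritten.lower()
    return {s for s in signals(original) if s.lower() not in rewritten_lower}

# Example drawn from the Steam Machine rewrite discussed above:
original = "Valve's Steam Machine: specs detailed, price not yet announced"
rewritten = "Steam Machine price revealed"
print(sorted(dropped_signals(original, rewritten)))  # → ["Valve's"]
```

A dashboard built on a check like this would not prove a rewrite is wrong, but it would surface candidates, like the dropped attribution here, for human review.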
For now, the test highlights a fundamental tension in platform-era news: the smoothness of AI-generated presentation versus the editorial scrutiny needed to keep it from misleading readers. Publishers and audiences should remain wary until the accuracy gap closes and the process becomes more transparent.
