Google has started testing AI-written headlines in its Discover feed, quietly replacing publishers’ original headlines with machine-generated versions that are often wrong. The early examples surfaced by tech reporters show summaries that are misleading, stripped of context, or whittled down into awkward four-word clickbait, causing consternation among editors and readers alike.
What some users report seeing in Google Discover right now
Reporters at The Verge noticed Discover tiles showing AI-generated versions of breaking news in place of the original headlines. In one example, a nuanced PC Gamer story about an oddity in Baldur’s Gate 3 was blunted into “BG3 players exploit children,” stripping out crucial context, and with it, sense. A careful 9to5Google article about Qi2 charging became “Qi2 slows down older Pixels,” an overreaching claim the actual post does not support. And an Ars Technica story was rewritten to read “Steam Machine price revealed,” although no price was actually revealed.
These aren’t just bad rewrites. They frequently veer into factually dubious or suggestive territory in ways that misrepresent the underlying reporting. The template is the same: short, punchy, and shareable — but often wrong.
Why AI-generated titles in feeds often get the facts and tone wrong
Headline writing sits at the crossroads of precision, subtlety, and curiosity. Generative systems tuned to condense and to maximize clicks have a hard time respecting those bounds, particularly when they are stripped of full article context or misread tone. The result is a bias toward brevity and salience (names, numbers, outraged verbs) at the cost of accuracy.
There’s also a structural issue: many models default to generic outputs rather than source-faithful ones, favoring flat “X does Y” constructions over caveats and attribution. In a feed like Discover, where the headline is often the only signal a user sees before tapping, that failure mode turns editorial nuance into noise.
Lack of clear labels for AI headlines raises serious trust questions
The test appears to ship AI headlines without a clear label, making it difficult for users to tell that the wording was machine-generated.
That omission matters. If an AI-generated headline is misleading, readers tend to blame the publisher whose name runs right beside it. For newsrooms that obsess over “hed” precision, that’s brand damage they never signed up for.
Trust is already fragile. The Reuters Institute’s latest Digital News Report puts average trust in news at around 40%, and unlabeled AI mediation between publishers and readers risks pushing that figure lower. Regulators are paying attention too: consumer protection agencies have signaled that ambiguous AI labeling can be deceptive, especially when it changes the meaning of editorial content.

Google says the AI headline test is a small, limited experiment
A Google spokeswoman described the change as a “small UI experiment,” framing it as limited testing rather than a broad deployment. There was no timetable for wider availability and no mention of safeguards, publisher controls, or labeling requirements. Even so, the decision to test such a sensitive feature live in the wild underscores how eager the company is to push AI summarization across more of its surfaces.
It follows earlier AI-generated summaries in Discover and the company’s AI Overviews in Search, both of which have drawn criticism for accuracy and presentation. The headline test suggests Google is still probing how far it can go in rewriting third-party journalism inside its own UI.
Publishers seek control and accuracy over AI headline rewrites
News outlets depend on Google surfaces for visibility more than ever. Analytics providers such as Chartbeat and Parse.ly have found that Discover can be a major driver of mobile traffic, in some categories rivaling Search. That makes headline integrity a business issue as well as an editorial one.
The industry’s asks are simple: keep the original headline by default; clearly label any AI rewriting; and provide a robust opt-out, whether through structured data, robots directives, or a Search Console setting, so publishers can refuse automated rewrites of their work. Without those safeguards, a single overzealous rewrite can undo hours of careful reporting and fact-checking.
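To make the opt-out idea concrete, here is a minimal Python sketch of how a feed pipeline could honor such a signal. It assumes a hypothetical robots-style meta tag, `google-discover`, carrying an invented `noheadlinerewrite` token; neither name is a real Google directive, and the whole mechanism is speculative.

```python
# Speculative sketch of honoring a publisher opt-out before rewriting a headline.
# The meta name "google-discover" and the token "noheadlinerewrite" are invented
# for illustration; Google has announced no such directive.
from html.parser import HTMLParser


class OptOutParser(HTMLParser):
    """Collects comma-separated tokens from the hypothetical meta tag."""

    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attr = dict(attrs)
        if (attr.get("name") or "").lower() == "google-discover":
            for token in (attr.get("content") or "").split(","):
                self.directives.add(token.strip().lower())


def rewriting_allowed(page_html: str) -> bool:
    """Return False when the page carries the hypothetical opt-out token."""
    parser = OptOutParser()
    parser.feed(page_html)
    return "noheadlinerewrite" not in parser.directives


page = '<head><meta name="google-discover" content="noheadlinerewrite"></head>'
print(rewriting_allowed(page))  # False: keep the publisher's headline verbatim
```

The same token could just as easily travel through structured data or a Search Console toggle; the point is that the default should favor the publisher’s wording.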
The real danger of AI headline rewrites for news audiences
Bad AI headlines don’t just irritate editors; they can mislead the public at scale. A sensational four-word summary can stoke social outrage, inflict reputational damage, or spark policy debates on a foundation the story never laid. And because feed readers skim, many will never see anything beyond the headline, and none of the nuance the article itself delivers.
There’s a path forward. AI may help with variant testing and accessibility descriptions, but only under hard guardrails: no meaning changes, transparent labeling, and the publisher’s line treated as the canonical headline. Anything less is a distortion engine, not a discovery feed.
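For illustration, a guardrail of that sort could be as simple as refusing any variant that introduces new checkable claims. The Python sketch below compares numbers and capitalized names between the canonical headline and an AI variant and falls back to the publisher’s line on any mismatch; the heuristic, function names, and example headlines are all assumptions, not a description of Google’s system.

```python
# Illustrative guardrail sketch, not a system Google has described: accept an AI
# headline variant only when it introduces no numbers or capitalized names that
# are missing from the publisher's canonical headline; otherwise fall back.
import re


def salient_tokens(headline: str) -> set:
    """Numbers and capitalized words, a crude proxy for checkable claims."""
    return set(re.findall(r"\d+(?:\.\d+)?|[A-Z][A-Za-z0-9]+", headline))


def choose_headline(canonical: str, ai_variant: str) -> str:
    """Keep the AI variant only if its salient tokens all appear in the
    canonical headline; any new name or figure forces the fallback."""
    if salient_tokens(ai_variant) <= salient_tokens(canonical):
        return ai_variant
    return canonical


canonical = "Steam Machine announced, but Valve has not revealed a price"
print(choose_headline(canonical, "Steam Machine costs $499"))       # canonical wins: "499" is new
print(choose_headline(canonical, "Valve announces Steam Machine"))  # variant passes the check
```

Token matching is obviously crude, and a production check would need semantic entailment. But the design point stands: the publisher’s headline is the default, and the variant has to earn its place.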
Bottom line: unlabeled AI headlines risk misleading readers
Google’s Discover experiment is a reminder of how quickly AI can overstep when it sets out to “optimize” journalism. Until AI-generated headlines can match the precision and intent of human-written titles, and until users know exactly what was machine-made, swapping out publisher headlines will remain a trust-undermining gamble.