FindArticles © 2025. All Rights Reserved.

Audit finds AI news summaries often wrong and unreliable

By Gregory Zuckerman
Last updated: October 26, 2025 8:50 pm
Technology | 7 Min Read

If you rely on artificial intelligence to catch up on the news, a cross-border audit suggests it’s time for caution. According to a study by the European Broadcasting Union and the BBC, leading AI assistants are failing badly at producing quality news summaries. Forty-five percent of the answers evaluated contained at least one significant problem, and 20 percent included major inaccuracies, such as made-up facts or outdated information.

The research covered 18 countries and 14 languages, with professional journalists analyzing thousands of AI-generated responses to recent events. The systems tested were ChatGPT, Copilot, Gemini and Perplexity — tools that are increasingly taking on the function once occupied by traditional search and news feeds.

Table of Contents
  • Inside the findings of the cross-border AI news audit
  • Why trust in news is at stake as AI tools gain users
  • Real-world risks are rising as synthetic media proliferates
  • Where AI news summaries go wrong and why it matters
  • How to read AI news responsibly and verify information
  • The bottom line: use AI news as a starting point, not truth

Inside the findings of the cross-border AI news audit

Across the board, the chatbots tripped up on basic journalistic fundamentals. Reviewers called out inaccuracies, weak sourcing and a blending of fact with opinion. Gemini was the least reliable: 76 percent of its answers were judged to have serious problems, most often inadequate citation and claims that could not be verified.

What went wrong most often? Hallucinations were a recurring failure mode, as were summaries that presented outdated facts as if they were breaking news. On multilingual tests, the errors were systemic rather than isolated, which suggests the problem is not an English-language edge case but a more fundamental limitation of how these models interpret and compress fast-moving news.

The report’s warning is blunt: when AI distorts the news this often, public trust erodes. As EBU leadership noted, the vulnerabilities are cross-border and systemic, and they risk steering audiences toward cynicism, undermining participation in democratic life and faith in democratic institutions.

Why trust in news is at stake as AI tools gain users

AI chatbots are already becoming an entry point into current events, particularly for younger audiences. Fewer than 10 percent of people use AI tools to keep up with the news each week, according to the Reuters Institute’s Digital News Report 2025, but that figure rises to roughly one in seven among those under 25. Adoption is far from universal, though: three-quarters of U.S. adults, according to a Pew Research Center survey, say they never get news from chatbots.

And even where search includes AI, people often don’t fact-check what they read. Studies of Google’s AI Overviews show limited trust in the feature but also low rates of clicking through to sources. That mix of high convenience and low verification is precisely where factual errors are most likely to spread.

Real-world risks are rising as synthetic media proliferates

These are not academic concerns. Generative video tools like OpenAI’s Sora have demonstrated how believable fake footage can look, from photorealistic imagery of battles that never took place to depictions of public figures who never consented. Watermarks can be removed, context can be stripped away, and the old adage that seeing is believing no longer holds.


Throw in social platforms engineered for engagement rather than accuracy, and the information environment becomes a tinderbox. AI does not create polarization simply by showing up, but it supercharges trends that have already fragmented audiences and rewarded sensationalism over careful reporting.

Where AI news summaries go wrong and why it matters

Large language models are very good at pattern recognition; they are far less adept at live fact-checking. They answer by compressing knowledge from large training datasets, which can be out of date, and they struggle to distinguish fresh evidence from stale context unless retrieval and sourcing are robust. When prompts ask for confident, brief takeaways, models tend to overstate certainty and skimp on caveats, precisely the inverse of good journalism.

Sourcing is another weak link. Without clear citations, readers can’t trace claims back to the source material. The study’s reviewers repeatedly flagged this gap, which erodes accountability and makes it harder for readers to assess credibility.

How to read AI news responsibly and verify information

AI is a stepping-off point, not the last word. Click through to the original source of the news, favor organizations that post corrections and detailed methodology, and pay attention to timestamps. Ask AI tools for sources, and then vet those sources on your own.

Newsrooms are testing AI-powered tools to improve workflow efficiency, but leading outlets and standards bodies emphasize human oversight. The Associated Press, for example, advises against publishing generative AI output as finished news stories without strong editorial review. On the technology side, efforts like the Coalition for Content Provenance and Authenticity are pushing for tamper-resistant media provenance so audiences can verify that what they see is authentic.

The bottom line: use AI news as a starting point, not truth

AI can help you scan the headlines, but the audit shows it to be an unreliable primary source.

Forty-five percent of evaluated responses had significant problems, and 20 percent were badly wrong, so the risk is clear. Treat AI news summaries not as truth but as suggested lines of inquiry: follow the links, read the reporting, and let verifiable evidence, not confidently worded prose, guide your understanding.

By Gregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.