Amid the chaos that followed Charlie Kirk's collapse at a public gathering in Utah, AI chatbots did what they so often do under pressure: they filled the silence with bold indecision. The scramble highlighted an enduring truth about generative AI, the branch of machine learning that produces new text and media from patterns learned in training data: these systems are designed to spit out plausible answers, not necessarily accurate ones, and nowhere is that more damaging than in a fast-moving misinformation crisis.
Amid conflicting and ricocheting messages on social platforms (Was he alive? Was a suspect in custody?), conversational bots gave users directly contradictory "facts." Media watchdogs say the bots weren't just occasionally mistaken; the confidence, speed and reach of their answers made the errors all the more persuasive.

How chatbots amplified confusion
NewsGuard collected numerous examples of high-profile AI assistants getting things wrong in the wake of the shooting. Perplexity's account on X, @AskPerplexity, initially said that Kirk had died, then told a user he was alive, reversing itself in reply to a prompt about gun policy. Elon Musk's Grok bot went further, telling users that an authentic video of the shooting was merely a "meme that had been edited," insisting Kirk would "survive this one easily" and doubling down that no real harm had been done. In other cases, users posted AI-generated summaries declaring that a partisan motive had already been confirmed by major outlets; it hadn't, and no such reports existed.
Google's AI-generated responses also included false claims, such as the assertion that Kirk had made a foreign "enemies" list, an illustration of how misinformation can propagate through algorithmically composed summaries. The bots effectively laundered rumor into narrative, lending speculation the patina of authority.
Why LLMs flounder in fast-changing crises
Large language models don't report; they infer. They don't phone local police departments, scrutinize eyewitness video or wait for authorities to confirm anything. They synthesize from available patterns and err on the side of coherence over circumspection. That's fine for drafting emails or illustrating an idea, but it is fatal in breaking news, when early signals are weak and bad information far outweighs good.
As NewsGuard researchers note, the failure mode of these bots changed once they acquired real-time browsing. Rather than declining to answer, they now reach for whatever is closest at hand and most abundant: social posts, AI-generated content farms and low-engagement sites seeded by bad actors. As one observer put it, the algorithms don't vet claims; they confer authority on whatever gets repeated. As the Brennan Center for Justice warns, this dynamic feeds into what it calls "the liar's dividend": with so much misinformation swirling, partisans can dismiss any authentic evidence they dislike as fake, and genuine debunks get lost in the churn.
Platforms are wiring AI into news habits
Adding to the risk, AI is being built into the places where people already encounter headlines: search results, social feeds and even reply threads. New research from the Pew Research Center finds that users shown AI-generated story summaries in search are less likely to click through to the underlying sources than those using traditional search, eroding even basic verification habits. The Reuters Institute has likewise shown that news consumption in platform environments is becoming more passive, with weak cues to provenance and little context.
At the same time, many gatekeepers have scaled back human moderation and fact-checking as they outsource more trust decisions to community tools and automation. X has tested features that would let chatbots contribute community notes, an experiment critics say would blur the line between assistance and authority. When a confident AI summary sits at the top of a feed, a single mistake can echo through millions of impressions before corrections arrive.
What ethical AI for news should look like
There are practical guardrails that would make a difference. During fast-moving, high-risk events, chatbots should default to stating what is not yet known, explicitly labeling it as such, and prominently point users toward a small set of authoritative, named sources. Real-time browsing should be restricted to vetted domains with provenance metadata, and systems should present citations up front instead of folding them into the narrative. Platforms should limit how quickly answers about a live crisis can be generated, prioritize updates from official local sources and established newsrooms, and run public post-incident audits when things go wrong.
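None of this requires exotic technology. As a purely illustrative sketch, here is roughly what such a guardrail layer could look like in code; the vetted-domain list, the corroboration threshold and the CrisisGuardrail class are hypothetical assumptions for illustration, not any platform's actual implementation.

```python
import time
from dataclasses import dataclass, field

# Hypothetical allowlist of vetted domains; a real system would maintain
# this list editorially and attach provenance metadata to each source.
VETTED_DOMAINS = {"apnews.com", "reuters.com", "utah.gov"}

# Minimum number of independent vetted sources before a claim is repeated.
MIN_CORROBORATION = 2

@dataclass
class Source:
    domain: str
    claim: str

@dataclass
class CrisisGuardrail:
    max_answers_per_minute: int = 30          # crude rate limit for a live event
    _timestamps: list = field(default_factory=list)

    def _rate_limited(self) -> bool:
        # Keep only timestamps from the last 60 seconds.
        now = time.monotonic()
        self._timestamps = [t for t in self._timestamps if now - t < 60]
        if len(self._timestamps) >= self.max_answers_per_minute:
            return True
        self._timestamps.append(now)
        return False

    def answer(self, claim: str, sources: list[Source]) -> str:
        # Throttle output during a live crisis rather than flooding feeds.
        if self._rate_limited():
            return "High demand on a developing story; please check official sources directly."

        # Count independent vetted domains that corroborate the claim.
        corroborating = {s.domain for s in sources
                         if s.domain in VETTED_DOMAINS and s.claim == claim}

        if len(corroborating) < MIN_CORROBORATION:
            # Default to "not yet known" and point users to named sources.
            return ("This has not been confirmed by enough vetted sources. "
                    "For updates, see: " + ", ".join(sorted(VETTED_DOMAINS)))

        # Citations are surfaced up front, not buried in the narrative.
        return f"Confirmed by {', '.join(sorted(corroborating))}: {claim}"


guardrail = CrisisGuardrail()
print(guardrail.answer(
    "A suspect is in custody.",
    [Source("apnews.com", "A suspect is in custody."),
     Source("randomblog.example", "A suspect is in custody.")],
))
# Declines to confirm, because only one vetted domain corroborates the claim.
```

The point of the sketch is the default behavior: when corroboration falls short, the system says so and names where to look, rather than synthesizing a confident answer from whatever is circulating.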
What should users keep in mind when they turn to a chatbot under these conditions? That the newsroom model of reporting, corroboration and editorial review still beats probabilistic text. Kirk's death was a stress test the bots flubbed in familiar ways: privileging speed over accuracy and engagement over authenticity. Until AI systems are built to treat "we don't know yet" as a legitimate answer, they will not be much good at guiding us when the facts aren't settled.