As the U.S. wages war with Iran, a second battle is raging online. Within hours of the first salvos, fabricated videos, mislabeled images, and AI-edited clips flooded social feeds, overwhelming users and complicating independent reporting from the ground. The incentive structure is clear: attention equals money and influence, and both are flowing to accounts that post first and fact-check never.
What begins as rumor now scales instantly. Experts tracking the information space say engagement-chasing posts are reaching nine-figure view counts in days, turning the “fog of war” into a business model powered by bots, algorithmic boosts, and generative AI.
The Disinformation Machine Finds A Battlefield
In the immediate aftermath of reported strikes, including a blast at Iran’s Shajareh Tayyebeh school that local reports said killed up to 168 people, feeds were inundated with out-of-context footage and outright fabrications. A Wired investigation cataloged hundreds of misleading posts on X, many shared within minutes of real-world explosions. A clip with more than 4 million views showing missiles “over Dubai” was actually from an earlier attack on Tel Aviv, while a viral before-and-after image claiming to show the compound of Ali Hosseini Khamenei was digitally concocted and still drew hundreds of thousands of impressions.
Crucially, Wired found that most of these posts came from premium, blue-check accounts — including state-funded Iranian outlets — giving misinformation a veneer of legitimacy and additional algorithmic reach.
AI Supercharges Engagement Farming On War Footage
Old tactics have been upgraded. As in earlier conflicts, video game footage was repackaged as combat clips — think flight-simulator scenes miscaptioned as downed F-35s — but this time AI editing and synthetic generation have made the fakes cleaner and faster to produce. The BBC reported such fabrications coursing through TikTok, including assets tied to known Russian influence networks, and tallied nearly 100 million views across a handful of AI-generated war videos amplified by notorious “super-spreader” accounts.
The motive is not always geopolitical; creators are simply cashing in on virality. The dynamic grew significant enough that X updated its rules, saying it will suspend users from its Creator Revenue Sharing program if they post unlabeled AI war content. Moderation is playing catch-up with technology, and with the financial incentives that reward speed over accuracy.
Falsehoods With Real-World Targets And Impact
NewsGuard’s analysts documented a surge of posts inflating Iran’s counterstrikes and spreading invented battlefield “wins.” One striking example: an image touted as proof that the USS Abraham Lincoln was sinking in the Arabian Sea. U.S. Central Command knocked down the claim, and investigators traced the photo to the deliberate sinking of the USS Oriskany nearly two decades ago. The fake still reached millions after prominent accounts — including an elected official from Kenya — shared it.
Another widely shared video alleged a strike on Israel’s Dimona nuclear facility; community fact-checks later tied the footage to a 2017 attack in Balaklia, Ukraine. NewsGuard estimates such miscaptioned posts have already drawn at least 21.9 million views on X, illustrating how quickly recycled visuals can be weaponized in a new context.
Search And Chatbots Compound The Confusion
As users seek fast verification, AI assistants and search summaries are becoming part of the problem. Investigations found X’s Grok bot confidently endorsing fabricated images of Iranian military moves. NewsGuard separately reported that Google’s AI-powered Search Summaries echoed misleading claims when users ran reverse image checks — including describing a 2015 residential fire in Sharjah as fresh evidence of a “CIA outpost” attack amid regional tensions.
Journalists warn that these tools, built to generate probabilistic answers rather than verified reporting, perform worst during breaking crises. The result is a feedback loop: viral fakes prompt rushed queries, AI systems fill the gaps with authoritative-sounding summaries, and misinformation gains another coat of credibility.
Why The Fakes Stick And Spread During Crises
Researchers note that the window between an event and authentic visuals has narrowed, testing public patience and amplifying confirmation bias. Sofia Rubinson of NewsGuard’s Reality Check team says anonymous “conflict” accounts exploit this void, posting dramatic but dubious content that larger influencers then launder to mainstream feeds. Distance from the battlefield further lowers the bar for believability, especially when slick visuals arrive before verified reporting.
The stakes are not abstract. A UK Centre for Emerging Technology and Security report warns that AI-driven information threats can erode crisis response, inflame public fear, and distort democratic decision-making. And with intermittent internet access inside Iran and neighboring areas, on-the-ground verification is harder, giving propagandists and engagement farmers more room to operate.
What To Watch For Now As Conflict Misinformation Spikes
Expect more recycled footage, simulated “ops” videos, and AI-generated scenes labeled as exclusive leaks. Treat battlefield “firsts” with skepticism; look for corroboration from multiple independent outlets and official statements that can be cross-checked. Community notes and OSINT researchers can help, but they are not infallible. Above all, beware of accounts that post constantly across crises, monetize virality, and rarely correct errors — those are the engines driving the current surge.