As U.S. forces trade fire with Iran, an avalanche of falsehoods is swamping social platforms, blurring the line between battlefield updates and fabricated spectacle. Within hours of the first strikes, engagement-hungry accounts pushed mislabeled videos, doctored images, and AI-crafted clips that drew massive reach while credible reporting lagged amid chaos and connectivity blackouts.
Engagement Farming Meets the Fog of War
The initial hours of any conflict create an information vacuum. This time, opportunists flooded that void. Flight-simulator footage was passed off as cockpit video. Old missile barrages resurfaced as “live” counterstrikes. Even scenes from unrelated disasters were pressed into service to sell a narrative of tactical dominance. The incentive is blunt: attention equals income, and attention in wartime is easy to spark.
Disinformation researchers describe a familiar playbook upgraded for speed. Accounts built for engagement farming—some ideological, many purely profit-driven—post sensational visuals first, fact-check later (if ever). Confirmation bias does the rest, as audiences grab whatever seems to support their priors and share it before verifiers can catch up.
AI Supercharges Misleading Narratives at Scale
What sets this cycle apart is the scale and polish of AI tooling. The BBC has documented completely AI-generated war videos racking up close to 100 million views across major platforms, often amplified by habitual “super-spreaders.” Investigations by Wired found hundreds of posts on X combining AI-edited visuals with recycled footage to exaggerate Iranian strikes. One viral clip with more than 4 million views showed missiles over a Gulf skyline; it was actually older footage from a different theater. Another post with hundreds of thousands of impressions pushed a fabricated “before-and-after” image tied to a false claim about Ayatollah Ali Khamenei.
AI systems meant to help users sort truth from fiction have stumbled. NewsGuard reported that Google’s AI-powered Search Summaries repeated misleading claims when fed frames from viral war footage, including wrongly contextualizing a high-rise fire as a recent attack. Separately, the BBC found that platform chatbots—such as X’s Grok—incorrectly validated AI-made images of Iranian military movements. The result: errors get laundered through tools people increasingly trust to verify breaking news.
Old Footage, New Lies Drive Wartime Hoaxes Online
Recycled imagery remains the backbone of many hoaxes. NewsGuard tracked a cascade of posts claiming a U.S. carrier had been sunk; the dramatic image was actually the intentional reefing of the decommissioned USS Oriskany, not a current combat loss. U.S. Central Command publicly debunked the rumor, but only after millions had seen it. Another widely shared video purported to show an attack on Israel’s Dimona nuclear facility; community notes later clarified it was footage from a munitions explosion in Ukraine years earlier.
According to NewsGuard, such posts amassed at least 21.9 million views on X alone. Wired noted that many of the highest-velocity uploads came from premium, blue-check accounts—including some tied to state-backed outlets—supercharging reach through algorithmic boosts and follower trust.
Platforms Struggle to Rein In Incentives
Monetization worsens the mess. The promise of payouts for viral posts nudges creators to publish first and verify never. X has updated its revenue-sharing rules, saying it will suspend payments to users who post unlabeled AI content depicting armed conflict. But researchers point out that enforcement is patchy and policy changes often trail the speed of misinformation waves.
Security analysts warn that the information environment itself is now a target. A report from the UK Centre for Emerging Technology and Security cautions that AI-driven deception and amplification threaten public safety and national security when crises unfold. The stakes are especially high when mischaracterized troop movements or infrastructure strikes can spark panic or prompt hasty responses.
Why Audiences Are Vulnerable to Wartime Falsehoods
In the crush of breaking news, reliable visuals arrive slowly while rumors travel instantly. NewsGuard's researchers describe a gap between events and the arrival of authentic imagery, one that impostors rush to fill. That pressure intensifies when on-the-ground journalists and civilians face shutdowns or throttling. Attempts to route around blackouts with satellite internet help some reporters and activists, but bad actors also slip through, keeping the rumor mill spinning.
How to Navigate the Disinformation Wave Safely
- Scrutinize “too-perfect” clips, especially those with cinematic angles, mismatched weather or skylines, or elements common to video games.
- Check for community notes and look for corroboration from multiple independent outlets or established open-source research groups.
- Treat monetized accounts and newly created “war news” feeds with caution.
- When AI helpers or search summaries appear confident about fast-moving claims, remember their track records are uneven—wait for verifiable details.
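Some of these checks can be partly automated. Open-source investigators often flag recycled footage by comparing perceptual hashes of suspect frames against known archival imagery. The sketch below is a minimal, illustrative version of "average hashing" using toy pixel grids in place of real frames; production tools such as pHash or Python's `imagehash` library apply the same idea at scale with real image decoding.

```python
# Minimal sketch of perceptual "average hashing", a common technique for
# spotting recycled footage. Frames here are toy grayscale pixel grids
# (lists of lists); real tools decode actual video frames the same way.

def average_hash(pixels):
    """Hash a grayscale frame: one bit per pixel, set if above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits; a small distance suggests the same footage."""
    return sum(a != b for a, b in zip(h1, h2))

# A hypothetical archival frame, a lightly re-encoded viral copy of the
# same scene, and an unrelated scene.
archive_frame = [[10, 200, 30], [220, 15, 180], [25, 210, 40]]
viral_frame   = [[12, 198, 33], [219, 14, 182], [24, 212, 38]]
unrelated     = [[200, 10, 220], [15, 230, 20], [210, 30, 240]]

print(hamming(average_hash(archive_frame), average_hash(viral_frame)))  # near 0: likely recycled
print(hamming(average_hash(archive_frame), average_hash(unrelated)))    # large: different scene
```

Near-zero distances between a "new" clip and old footage are exactly how fact-checkers traced the supposed carrier sinking back to the USS Oriskany reefing; compression and re-uploads change pixels slightly, but the hash survives.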
The broader truth is stark: during wartime, the platform that shouts loudest can warp reality for millions. Until moderation, monetization, and AI guardrails catch up, vigilance from users—and transparency from platforms—will be the only brakes on a disinformation machine built for speed.