AI-Generated Fakes Fuel Disinformation Surge in Iran War: Explosions, Missiles, and Troops Proven False
By Staff Reporter | March 12, 2026
In the fog of the ongoing Israel-Iran war, social media platforms have become battlegrounds for a new kind of warfare: AI-generated videos and images depicting fake explosions, missiles, and troop movements that have amassed tens of millions of views.[1]
Experts warn that the accessibility of generative AI tools has supercharged disinformation efforts, making it harder than ever for the public to distinguish reality from fabrication during this conflict, now dubbed by some as the “first AI war.”[1][2]
The Rise of High-Quality Fakes
Since the Iran war erupted nearly two weeks ago, a flood of synthetic media has proliferated online. One prominent fake video shows a barrage of Iranian missiles allegedly striking Tel Aviv, Israel, while another depicts panicked crowds fleeing an Iranian attack on a Tel Aviv airport.[1] A third clip falsely portrays captured U.S. special forces personnel held at gunpoint by Iranian troops.[1]
Static images are equally deceptive. Phony photos claim to show a U.S. military base in Iraq and the U.S. Embassy in Saudi Arabia ablaze after Iranian strikes. Others depict Iran’s Supreme Leader Ali Khamenei dead under rubble or Iranians mourning civilian casualties.[1] Even a publication tied to the Iranian government shared a fabricated satellite image of damage to a U.S. base in Bahrain.[1]
Shayan Sardarizadeh, a senior journalist with BBC Verify and a leading debunker of war-related fakes, attributes this surge to the democratization of AI technology. “What has changed in the last year or so is that generative AI has become much more widely accessible,” Sardarizadeh said, noting that these creations are now “very believable” and difficult to spot without expert analysis.[1]

Iran’s Cyber Propaganda Offensive
The disinformation campaign aligns with Iran’s aggressive cyber strategy. As one analyst noted in a recent broadcast, “Iran sees propaganda and disinformation as a very substantial part of its cyber arsenal.”[2] This digital assault has intensified in the days leading up to and since the war’s start, targeting Israel and its allies with a barrage of manipulated content.[2]
AI tools are also complicating fact-checking. Even the companies that build generative models struggle to identify synthetic media unless it came from their own systems: Google's Gemini can spot its own watermark, for instance, but falters on content from rival models.[2] Compounding the problem, X's AI chatbot Grok has misled users by affirming the authenticity of some Iran war fakes when queried for verification.[1]
Leadership Rumors and AI Images
Recent developments in Iran's leadership have spawned additional fakes. Following announcements that Mojtaba Khamenei would succeed as supreme leader, hundreds of AI-generated images of him circulated online, further muddying perceptions of the regime's stability.[3]
CNN correspondent Leila Gharagozlou highlighted how these images reflect deeper uncertainties in Iran’s power structure amid the conflict.[3]
Expert Tips for Spotting AI Fakes
As the “first AI war” unfolds, media outlets are ramping up efforts to combat misinformation. CNN-News18, for example, has exposed a surge in AI videos amid the Iran-Israel clashes, urging vigilance.[4] Fact-checkers recommend several basic checks:
- Check for inconsistencies: Look for unnatural lighting, distorted hands, or flickering shadows in videos.
- Reverse image search: Tools like Google Reverse Image Search can reveal if content is recycled or altered.
- Verify sources: Stick to reputable outlets and cross-reference claims.
- AI watermarks: Emerging standards may help, but they’re not foolproof yet.[2]
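To make the reverse-image-search tip above concrete: services that match recycled or lightly altered images often rely on perceptual hashing, where visually similar images produce similar fingerprints. The sketch below is a toy illustration of one such scheme (an "average hash" over grayscale pixels), not the algorithm any particular search engine actually uses; the sample "images" are synthetic pixel lists invented for the demo.

```python
# Toy illustration of perceptual hashing, the idea behind reverse-image-
# search matching. An "average hash" marks each pixel as above or below
# the image's mean brightness; near-identical images yield near-identical
# hashes, so a small Hamming distance suggests recycled content.
# Real services are far more sophisticated than this sketch.

def average_hash(pixels):
    """Compute a perceptual hash from a flat list of grayscale values (0-255)."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming_distance(h1, h2):
    """Count differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical 8x8 "images": a gradient, a slightly brightened copy,
# and an unrelated (inverted) image.
original = [i * 4 for i in range(64)]
altered = [min(255, p + 3) for p in original]
unrelated = [255 - p for p in original]

print(hamming_distance(average_hash(original), average_hash(altered)))    # 0: likely the same image
print(hamming_distance(average_hash(original), average_hash(unrelated)))  # 64: clearly different
```

The design point is that cryptographic hashes fail this task (one changed pixel changes the whole hash), while perceptual hashes degrade gracefully, which is why a slightly recompressed or cropped fake can still be traced back to its source image.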
All Israel News emphasized that the quality of these fakes is improving rapidly in regions like Israel, eroding public ability to discern truth from lies.[2]
Broader Implications for Global Conflicts
This wave of AI-driven fakes echoes past conflicts, but with vastly greater realism and reach. In 2022, crude manipulations spread during other wars; today's tools enable hyper-realistic deceptions at scale.[1] The result is eroded trust in visual evidence, potential escalation from misinterpretations, and overwhelmed fact-checkers.
Governments and platforms face mounting pressure to respond. Social media companies must enhance detection algorithms, while international bodies call for AI content labeling mandates. For now, users bear the brunt: in an era where seeing is no longer believing, critical thinking is the best defense.
“It’s now possible to create very believable videos and images appearing to show a significant war incident that is hard to detect to the untrained or naked eye.” – Shayan Sardarizadeh, BBC Verify[1]
What’s Next in the Information War?
As the physical war rages, the digital front shows no signs of abating. With AI evolving daily, experts predict even more sophisticated fakes ahead. Staying informed means questioning everything – especially what looks too dramatic to be true.
This story is developing. Updates will follow as fact-checkers debunk more content and platforms respond.