
The Iran War Fake Video Flood: How Games, AI, and Old Footage Rewrote a Real Conflict

100 million+ views of fabricated war footage — from War Thunder clips to AI-generated carrier sinkings — as three state actors weaponize the fog of war

TL;DR — Misleading

The conflict is real. The videos are not.

Operation Epic Fury launched February 28, 2026, with confirmed casualties on all sides. But the specific videos flooding social media — including a video game clip of a WWII-era US battleship downing a Nazi German aircraft, a 2015 Chinese chemical explosion, an Algerian football celebration, and AI-generated aircraft carrier sinkings — are fabricated, misattributed, or recycled from unrelated events. BBC Verify documented three top AI videos alone accumulating over 100 million combined views. Three state actors (Iran, the Israel-backed PRISONBREAK network, and Russia's Operation Overload) were simultaneously flooding the information environment from opposite directions — while X's own AI chatbot, Grok, falsely confirmed at least one fake video as real.

Executive Summary

When Operation Epic Fury launched on February 28, 2026, the information war started faster than any boots hit the ground. Within hours, social media platforms were flooded with fabricated, recycled, and AI-generated footage falsely depicting the conflict — a pattern now familiar from Ukraine and Gaza, but dramatically amplified by the accessibility of generative AI tools. BBC Verify documented three top AI-generated videos accumulating over 100 million combined views in under two weeks. [1] NewsGuard found posts garnering more than 21.9 million views that falsely claimed to show Iran gaining military advantage over Israel and the United States. [2] One pro-Iranian account doubled its X follower count from 700,000 to 1.4 million in under a week by farming engagement from conflict misinformation. [3]

The fake content falls into four distinct categories: recycled footage from unrelated events (a 2015 Tianjin chemical explosion, a 2015 Sharjah apartment fire, a 2020 Algerian football celebration), video game clips from War Thunder and Arma 3 that accumulated over 27 million views combined, custom AI-generated videos made with tools like Google Veo and Midjourney, and misattributed genuine footage from earlier conflicts. A War Thunder clip depicting the WWII-era USS Tennessee shooting down a Nazi German aircraft was shared by Texas Governor Greg Abbott as real combat footage before deletion. [4] Arma 3 footage purporting to show a US aircraft carrier in flames racked up over 20 million views across X posts. [5]

The structural conditions enabling this flood are novel: Iran's near-total internet blackout — Cloudflare reported a 98% traffic drop on February 28 — created an information vacuum that bad actors from every direction rushed to fill. [6] Unlike previous conflicts where citizen journalists provided ground truth, the near-absence of verified footage from inside Iran meant fabrications faced almost no competition from reality. Meanwhile, X's creator revenue-sharing program financially incentivized premium accounts to post sensational fabricated content, with Grok, X's built-in AI assistant, compounding the damage by falsely confirming at least one fake Tel Aviv video as authentic and citing Reuters, CNN, and Euronews as supposed sources. [7]

Section 1: The Claim — What's Flooding Social Media

In the days following Operation Epic Fury, social media became a firehose of purported war footage — most of it fake. The categories of fabrication documented by fact-checkers across at least a dozen organizations include: video game gameplay presented as live combat, decade-old disaster footage rebranded as current strikes, AI-generated videos indistinguishable to the casual viewer from security camera footage, and state-produced propaganda from Iran's own government broadcasters. [8]

Iran War Fake Video Types (by documented incidents)
12 distinct fake video incidents cataloged by GenuVerity across fact-checker sources. AI-generated content is now the single largest category, ahead of both video game footage and recycled archival footage.

The scale was documented with unusual precision. The Journal (Ireland) cataloged over 10 individual debunked videos in its fact-check roundup, noting that one clip alone — Arma 3 aircraft carrier footage — had been viewed more than 20 million times across X posts. [5] Euronews and NewsGuard independently counted 21.9 million views on false claims of Iranian military advantage. [7] The European Digital Media Observatory described the Iran-Israel conflict as "the first AI war" — the first major conflict in which AI-generated content reached mass audiences before verified footage could compete. [20]

What makes the 2026 surge qualitatively different from Ukraine 2022 or Gaza 2023 is not just volume, but the combination of three simultaneous conditions: a near-total internet blackout inside Iran, commercially accessible AI video generation tools, and monetization systems on major platforms that financially reward fabrication. Each of these conditions alone was present in prior conflicts. Their convergence is new. [19]

FACT-CHECK — Euronews debunks three fake videos that went viral during the Iran conflict
Euronews' The Cube investigates the flood of fake videos — from War Thunder gameplay to AI-generated missile strikes — spreading during the Iran conflict [7]

Section 2: The Catalog — Forensic Breakdown of Each Fake Video

Fake 1: War Thunder WWII Clip — "US Ship Shoots Down Iranian Jet" (7.3M views)

A 21-second clip captioned "An Iranian plane VS a US ship. I could watch this all day" reached 7.3 million views on X after being posted March 1, 2026. Texas Governor Greg Abbott replied "Bye bye" and reshared it to his 1.4 million followers. The clip is gameplay from War Thunder, a military simulation video game. The "warship" is the USS Tennessee, a US dreadnought built in the 1910s, decommissioned in 1947. The "Iranian fighter jet" is a Messerschmitt Me 163B-1a Komet — a Nazi German aircraft from World War II. [4] [10]

Konstantin Govorun, head of PR at Gaijin Entertainment (War Thunder's developer), confirmed to AFP: "Yes, this looks like War Thunder footage." Abbott deleted his post without acknowledgment and subsequently issued a statement endorsing the military action and directing the Texas National Guard to increase port security — pivoting entirely away from the embarrassment. [22]

Fake 2: Tianjin 2015 Chemical Explosion — "This Is Tel Aviv. Thank You Iran!" (2M+ views)

A video captioned "THIS IS TEL AVIV. THANK YOU IRAN!" spread on X March 1, 2026. The footage is from a 2015 chemical warehouse explosion in Tianjin, China, which killed 173 people — including 104 firefighters — and injured 798 more. The same video was previously misused to claim it showed Ukraine in 2022. This is documented as at least the third conflict cycle in which this footage has been weaponized. Fact-checkers identified it by matching it against BBC archival footage from August 2015. [8] [17] [23]

Fake 3: Algeria Football Celebration — "Iranian Missiles Striking Tel Aviv" (4M+ views)

Footage of football fans launching fireworks and flares in Algeria's capital, Algiers, after CR Belouizdad won the Algerian championship in August 2020 was presented as Iranian missiles striking central Tel Aviv, complete with a red-lit sky and fires across buildings. The same footage was debunked in 2023, when it was falsely claimed to show an Israeli attack on Gaza. Google reverse image search traced the origin to a Berbere Television X post from August 7, 2020, geolocated to Al Mokrani Square, Algiers. [9]

Fake 4: Arma 3 Footage — "US Aircraft Carrier Sinking" (20M+ views)

Multiple X posts showing what appeared to be a US aircraft carrier sinking, attributed to Iranian missiles, accumulated over 20 million combined views. The footage is from Arma 3, a photorealistic military simulation game; the deck markings do not match the USS Abraham Lincoln. US Central Command stated the ship "was not hit" and missiles "did not even come close." A second Arma 3 clip, captioned "The US has unleashed its powerful F-15 fighter jets in the largest airstrike in modern history" (with Chinese-language text), reached 5+ million additional views. [5] [16]

MISINFO SOURCE — This is the original video game footage that was falsely shared as real war footage
The original Arma 3 gameplay video by Battle Dragon (2024) that was later shared as real US airstrike footage, racking up over 5 million views with false captions [16]

Fake 5: Sharjah 2015 Residential Fire — "CIA Dubai Headquarters Hit by Iran" (10M+ views)

An October 1, 2015 fire at an apartment building in the Al Khan area of Sharjah, UAE, was shared as evidence of Iran targeting the CIA's headquarters in Dubai. The fire injured 19 people and displaced 250 households. The CIA has no headquarters in Dubai; its actual headquarters is in Langley, Virginia. Notable amplifier: Maram Susli ("Syrian Girl"), a pro-Assad commentator with a large following. The clip spread across Facebook, Russian-language social media, and X. [11] [12]

Fakes 6–10: AI-Generated Content (millions of additional views)

Tel Aviv ballistic missile strike (security cam): AI-generated video with duplicated building rooftops, unnatural orange smoke coloring, absent sirens, and frame rate inconsistencies. Spread across X, TikTok, Instagram, YouTube, and Douyin. [7]

"Bahrain USS Navy Fleet Destroyed": AI-generated video with people walking through vehicles, cars with three headlights and six wheels, a floating buggy. Shared by Iranian government-linked accounts. [13]

Japanese H3 rocket as Iranian missile launch: Footage of a 2024 Japanese H3 rocket launch from Kagoshima Space Center presented as an Iranian ballistic missile targeting Israel. [14]

Captured US soldiers in Tehran: AI-generated images retaining a visible Google Gemini logo in the image frame — a direct disclosure of AI origin that went unseen by viral sharers. [15]

Khamenei body under rubble: AI-generated images appeared within hours of Khamenei's confirmed death on February 28, exploiting the information vacuum between confirmed death and verified imagery. AFP and Reuters confirmed synthetic origin through visible deformities and SynthID metadata. [3]

Tehran Times satellite imagery: A "before vs. after" image pair claiming to show "completely destroyed" US radar at Al-Udeid Air Base in Qatar. Analyst Tal Hagin identified the "before" image as a Google Earth image from February 2, 2025 — one year before the conflict. AFP's SynthID scan found gibberish coordinates embedded in the fake alongside the visible tell that parked cars appeared in identical positions in both images. [6]

Documented View Counts by Fake Video (millions)
Approximate view counts sourced from TheJournal.ie, PCGamesN, Euronews, AFP, and Erkansaka.net. AI-generated video counts are partial — exact figures unavailable at platform level.
| Viral Claim | What It Actually Shows | Views | Fake Type |
| --- | --- | --- | --- |
| "US warship shoots down Iranian jet" | War Thunder WWII game: USS Tennessee vs. Messerschmitt Me 163B (Nazi Germany) | 7.3M | Video game footage |
| "THIS IS TEL AVIV. THANK YOU IRAN!" | 2015 Tianjin, China chemical warehouse explosion | 2M+ | Recycled archival |
| "Iranian missiles striking center of Tel Aviv" | CR Belouizdad football fans, fireworks, Al Mokrani Square, Algiers (2020) | 4M+ | Recycled archival |
| "US aircraft carrier sinking from Iranian missiles" | Arma 3 military simulation game footage | 20M+ | Video game footage |
| "CIA Dubai headquarters hit by Iran" | Oct 1, 2015 residential building fire in Sharjah, UAE | 10M+ | Recycled archival |
| "Iranian ballistic missiles hitting Tel Aviv (security cam)" | AI-generated — duplicated rooftops, unnatural smoke, no sirens | Millions | AI-generated |
| "Iran destroys US Navy fleet in Bahrain" | AI-generated — people through vehicles, cars with 3 headlights | Millions | AI-generated |
| "Iranian missile launch targeting Israel" | 2024 Japanese H3 rocket, Kagoshima Space Center | Unknown | Recycled archival |
| "Captured US soldiers paraded in Tehran" | AI-generated — Google Gemini logo visible in frame | Unknown | AI-generated |
| "US radar at Al-Udeid destroyed" (Tehran Times) | Manipulated Google Earth image from Feb 2025 + AI-added destruction | Unknown | AI-generated |

Section 3: Who's Behind It — Three State Actor Tracks

ABC News and the AP documented that the Iran conflict disinformation was not merely opportunistic — it was coordinated from three simultaneous directions, each with distinct objectives and audiences. [13]

Track 1: Iran — State Media Fabrication

Iran's state broadcaster IRIB aired old footage unrelated to the current war as if it were live coverage. Tehran Times published the fake "Al-Udeid radar destroyed" satellite imagery within hours of the initial strikes — an operational tempo suggesting pre-prepared materials. Iran's Tasnim news agency published false claims of 650 US troops killed in two days (actual CENTCOM count: 6). Mehr News claimed four ballistic missiles hit the USS Abraham Lincoln (CENTCOM: "did not even come close"). PressTV published Pakistani drone imagery falsely labeled as Israeli operations. [6]

Iran-aligned accounts on X shared the AI-generated Bahrain fleet destruction video, with one account doubling its following from 700,000 to 1.4 million in under a week through engagement farming around conflict fabrications. Additionally, Iran's government launched a fake Starlink application designed to surveil citizens and distribute internal disinformation to prevent defection within its own ranks. [28]

Track 2: Israel-Backed "PRISONBREAK" Network

The University of Toronto's Citizen Lab documented an Israeli-backed AI influence operation — designated "PRISONBREAK" — targeting Iranian domestic audiences with regime-change propaganda. The network comprised more than 50 inauthentic X profiles coordinating their posting, supported by supplementary YouTube accounts. [27]

The operation's most striking documented moment: an AI-generated video of an Evin Prison bombing was posted on PRISONBREAK accounts within one hour of actual Israeli airstrikes — strongly suggesting that operators had prior knowledge of the military campaign. Specific AI artifacts identified by Citizen Lab researchers included "duplicated tattoos, missing tattoos, people walking backward, crowds moving as singular sliding units." Tactics also included impersonating BBC Persian and Afkar News outlets and seeding posts into large X communities of 15,000+ members. Attribution confidence per Citizen Lab: "most likely undertaken by an entity of the Israeli government or a private subcontractor working closely with it." [27]

Track 3: Russia's Operation Overload / Matryoshka

Russia's Operation Overload demonstrated a novel two-layer strategy. Layer 1 was direct: fabricated Israeli intelligence warnings telling diaspora Jews in Germany and the US to avoid public spaces — designed to erode trust in Israeli institutions. Layer 2 was parasitic: using Iran's own false victory claims as raw material for separate anti-Ukraine content. Flashpoint Intelligence documented Russia circulating Iranian claims that "Iranian missiles destroyed Ukrainian military bases in Dubai" — linking two unrelated conflicts to reinforce Russia's existing Ukraine narrative. [28]

The Matryoshka operation went further than sharing fake videos — it fabricated entire cloned Euronews broadcast segments using AI voice cloning of actual Euronews journalists to deliver false statements in their authentic voices, distributed through the pro-Kremlin Pravda network and Telegram channels. Specific false claims included reports that "Ukrainian looters" attacked Dubai shops, that 19 Ukrainian detainees coordinated via WhatsApp, and that Armenian PM Pashinyan owned UAE properties worth $170 million. Armenia's press secretary publicly identified the campaign as "a classic FIMI [Foreign Information Manipulation and Interference] mechanism." [31]

| Actor | Key Activity | Primary Evidence Source |
| --- | --- | --- |
| Iran / IRIB, Tehran Times, Tasnim, Mehr News | Fake satellite imagery; state broadcaster using old footage; falsified casualty counts; fabricated ship sinking claims | Euronews Next, Tal Hagin analysis [6] |
| Israel-aligned "PRISONBREAK" (50+ accounts) | AI deepfakes targeting Iranian audiences; regime-change propaganda; BBC Persian impersonation; posted Evin video within 1 hour of strikes | Citizen Lab / University of Toronto [27] |
| Russia / Operation Overload / Matryoshka | Fake Israeli intelligence warnings; AI voice-cloned Euronews segments; weaponized Iran fakes for anti-Ukraine content | Euronews, Flashpoint Intelligence [28] [31] |
| Opportunistic non-state actors (X premium accounts) | Engagement farming using fake conflict footage; X revenue-sharing monetization | Erkansaka.net; X policy response [3] |

Section 4: The AI Dimension — Generated Fakes and Grok's Compounding Failure

AI-generated content was the largest single category of documented fakes in the Iran conflict — five distinct incidents compared to four for recycled archival footage and two for video game footage. The tools used — including Google Veo and Midjourney — are commercially available, require no technical expertise, and produce outputs that government officials and professional AI chatbots could not immediately identify as synthetic. [19]

Open-source intelligence analyst Tal Hagin warned in a PolitiFact investigation: "These fabrications are becoming more convincing and harder for seasoned experts to identify." [14] A researcher quoted in Rolling Stone described the broader dynamic: "The volume of AI content is starting to just pollute the information environment in these kinds of crisis settings to a really terrifying degree." [25]

Grok AI: Systematic Failure, Not Isolated Incident

The most thoroughly documented failure of AI amplifying disinformation was Grok, X's built-in chatbot. When users asked Grok about the AI-generated Tel Aviv ballistic missile video, it responded: "No, this isn't AI, it's a real photo from today's Iranian ballistic missile strikes on central Israel" — falsely citing Reuters, CNN, and Euronews as sources. [7]

RTÉ Primetime's investigation documented that this was not an isolated case but a pattern across at least four distinct query types during the conflict: [30]

  • Glasgow Central Station fire — Grok claimed it was "firefighting efforts in Tel Aviv following an Iranian missile attack," then revised to "a 2024 Tel Aviv residential fire" — both false.
  • Tehran oil drainage canal fires — Grok insisted it was "old 2017 Skirball Fire clips from LA's I-405 freeway."
  • Girls' school strike in Minab, Iran — Grok falsely identified it as "the aftermath of an ISIS attack on a school in Kabul in 2021," contradicting verified NYT/Reuters/Bellingcat reporting.
  • Gaza image — Misidentified as "a Yazidi girl fleeing ISIS in Syria in 2014."

On March 5 — after at least some of these failures were documented — Elon Musk posted: "Use Grok to fact check and ask questions about any post." He did not retract this endorsement. X did not respond to RTÉ's inquiries, and problematic Grok responses were only deleted after user complaints and press publication. [30] [24]

AI images with visible Google Gemini watermarks circulated as "captured US soldiers in Tehran" — the AI logo present in the frame itself, ignored by viral sharers. Tehran Times' fake satellite imagery was exposed by AFP using Google's SynthID detection tool, which found gibberish coordinates embedded in the image metadata alongside the naked-eye tell that parked cars appeared in identical positions in both the "before" and "after" images. [6]

Section 5: Evidence Deep-Dive — Platform Enforcement Gaps and Detection Limitations

X Creator Revenue-Sharing: The Financial Engine of Fabrication

X's head of product Nikita Bier stated that "99%" of accounts spreading AI-generated conflict videos were doing so specifically to "game monetization." The financial structure: X pays approximately $8–12 per million verified user impressions. Eligibility requires 5 million organic impressions in 3 months plus an X Premium subscription. A video achieving 20 million views — like the Arma 3 carrier clip — would generate an estimated $160–$240 in creator revenue per post. [3]
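
The payout arithmetic above can be sketched as a tiny estimator. This is a back-of-envelope illustration only: the $8–12-per-million rate is the range reported in this article, the function name is invented, and it assumes every view counts as a verified-user impression (an upper bound).

```python
# Hypothetical estimator for X creator payouts using the rates reported
# in this article ($8-$12 per million verified impressions). Illustrative
# only -- not X's actual payout formula or API.

def payout_range(views: int, low_rate: float = 8.0, high_rate: float = 12.0) -> tuple[float, float]:
    """Return (low, high) estimated USD payout for a given view count,
    assuming every view is a verified-user impression (an upper bound)."""
    millions = views / 1_000_000
    return (millions * low_rate, millions * high_rate)

# The Arma 3 carrier clip: ~20M views -> roughly $160-$240, as in the text.
low, high = payout_range(20_000_000)
print(f"${low:.0f}-${high:.0f}")  # -> $160-$240
```

The striking design point is how small the number is: a few hundred dollars per viral fake is enough to sustain the behavior because fabrication costs minutes, not days.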

Maldita.es documented the industrialized version of this model on TikTok: 550 coordinated accounts posting 5,800+ AI-generated videos linked to 18 countries, generating 89 million+ combined views. Accounts showed clear coordination — "very similar usernames, identical creation dates, and even the same profile pictures." A source operating multiple accounts disclosed the workflow: using temporary email addresses and the Chrome browser to spin up multiple accounts, then monetizing engagement on emotional, polarizing, false content. [32]

The enforcement timeline problem is structural. Community Notes typically takes hours to days to apply labels, while a fake conflict video's viewership typically peaks within the first 2–6 hours of posting — meaning the majority of cumulative views accrue before any label appears. X announced 90-day suspensions from creator revenue-sharing for undisclosed AI conflict videos on March 3, with permanent bans for repeat offenders. But the enforcement mechanism relies entirely on creator self-disclosure — there is no automated detection for content generated by non-Google AI tools. [7]

SynthID: Powerful but Partial

Google's SynthID AI detection tool was deployed by fact-checkers including AFP, Reuters, and BBC Verify to identify AI-generated images and video during the conflict. It successfully identified the Tehran Times satellite imagery and the Khamenei body images. However, SynthID only detects content generated by Google AI tools. Images generated by Midjourney, Stable Diffusion, RunwayML, Kling, or other non-Google platforms are not covered — creating a systematic blind spot in current detection infrastructure. [6]

| Platform | Claimed Response | What Actually Happened |
| --- | --- | --- |
| X (Twitter) | 90-day revenue suspension for undisclosed AI conflict videos; permanent ban for repeat offenders (March 3) | Grok simultaneously generating misinformation; Community Notes labels arrive after viral peak; "99%" of monetization-driven posts went up before any enforcement |
| TikTok | Banned 52 accounts posting AI Iranian women soldiers | Did not respond to CNN requests for comment on broader fake video spread; 550+ monetization-farming accounts documented by Maldita operated for months |
| Meta | Removed ~300 accounts in Iranian influence operation | Meta Oversight Board documented Meta's failure to flag at least one AI video during 2025 conflict; no public response on 2026 Iran spread [29] |
| Google (SynthID) | AI detection tool available for fact-checkers | Only covers Google AI outputs; Midjourney, Stable Diffusion, RunwayML fakes are undetectable by SynthID |

A further documented category of Iran war AI content: hundreds of synthetic videos depicting women as Iranian soldiers — a fabrication designed to create a false impression of gender integration in Iran's military, which explicitly bars women from combat roles. Combined views reached 10 million+ across platforms. TikTok took specific enforcement action, banning 52 accounts distributing this content. [33]

Section 6: Video Game Footage as Propaganda — Bohemia Interactive's Documented Struggle

The 2026 Iran conflict is the third major conflict in which both War Thunder and Arma 3 footage has been widely shared as authentic combat footage — following Ukraine 2022 and Israel-Hamas 2023. Gamereactor documented that War Thunder, Arma 3, and Call of Duty footage were all circulating as Iran war footage simultaneously in the same news cycle. [16]

Video Game Footage Recycled as War Propaganda: Cross-Conflict Recurrence
Both Arma 3 and War Thunder have been used as fake conflict footage in three separate wars. Source: Gamereactor, Propastop, Bohemia Interactive reporting.

Bohemia Interactive, the Czech studio behind Arma 3, has issued essentially the same statement three times: "It is disheartening for us to see the game we all love being used in this way." Their documented approach is active cooperation with AFP, Reuters, and other fact-checkers as the primary mitigation, rather than technical solutions such as watermarking or output restrictions. [21]

The scale problem is structural: Bohemia previously reported that for every Arma 3 fake news video that gets unpublished, 10 more are uploaded per day. This 10:1 replacement ratio makes platform-level moderation structurally inadequate. Arma 3 footage has now been falsely attributed to Afghanistan, Syria, Palestine, India-Pakistan, Ukraine-Russia, Israel-Hamas, and Iran — an escalating cross-conflict recycling problem that no platform has solved in four years of documented misuse. [21]

The War Thunder incident is particularly instructive. Abbott's non-response — deletion without acknowledgment, followed immediately by a pivot to endorsing the military action — illustrates the political utility of fake war footage even after exposure. The footage functioned as intended regardless of its authenticity: it generated engagement, reinforced a narrative, and prompted a public statement of military support from a sitting governor. [18] [22]

Section 7: Contemporary Context — Why This Is Different From Ukraine 2022

The Iran war disinformation surge is not simply "more of the same." Three conditions in convergence make it qualitatively different from any prior documented conflict.

Condition 1: The Information Vacuum Is Total

Iran's 98% internet traffic collapse (Cloudflare, February 28) is historically rare. In Ukraine, citizen journalists provided continuous ground-truth footage from Day 1. In Gaza, despite restrictions, journalists and aid workers maintained some documentation pipeline. In Iran, the near-complete blackout means fabrications face essentially no competition from verified reality. There is no Ukrainian Telegram channel equivalent providing authentic Iranian ground footage. [26]

Human Rights Watch documented that the shutdown "help[s] conceal large-scale atrocities, contribute[s] to the spread of mis- and disinformation, and unlawfully restrict[s] access to information" — and "severely hampers the work of journalists and human rights monitors, including documentation and reporting on possible laws of war violations by all parties." [26]

Condition 2: Generative AI Has Crossed the Usability Threshold

The tools used for fakes in this conflict — Google Veo, Midjourney, Kling, RunwayML — are commercially available, require no technical expertise, and produce outputs convincing enough that a sitting US state governor and a major platform's own AI chatbot could not immediately identify them as fake. This was not true in Ukraine 2022 — consumer-accessible AI video generation tools did not exist at this quality level. [20]

Condition 3: Financial Incentives Now Run in One Direction

X's creator economy, TikTok's engagement monetization, and YouTube's ad revenue all pay creators more for fake conflict content than for accurate reporting. Accuracy requires investment — research, verification, time. Fabrication is instant. The 550 TikTok accounts, 5,800 videos, and 89 million views documented by Maldita represent a functioning disinformation business model that did not exist at scale in 2022. [32]

| Dimension | Ukraine 2022 | Iran 2026 |
| --- | --- | --- |
| Video game footage as fakes | Yes (Arma 3, War Thunder) | Yes (Arma 3, War Thunder, Call of Duty) |
| AI-generated content scale | Minimal — tools not yet consumer-accessible | Massive — Veo, Midjourney, Kling all commercial |
| Internet blackout on conflict side | No — Ukraine had open internet; citizen journalists active | Yes — Iran 98% dark; no citizen journalism pipeline |
| State actor directions | Russia-backed (primary); some Ukrainian counter-disinfo | Iran + Russia + Israel-backed (three simultaneous) |
| Platform financial incentives | Creator revenue-sharing not yet established | X, TikTok, YouTube all paying for engagement |
| AI chatbot amplification | No major AI chatbots publicly deployed for fact-checking | Grok endorsed by platform owner, actively spread fakes |
| Tianjin footage recycled | Yes (2022, same Tianjin clip as "Ukraine war") | Yes (2026, same Tianjin clip as "Iran war") — 3rd conflict |

The Tianjin 2015 explosion footage has now circulated for 11 years: filmed during the original disaster in China in 2015, misattributed to Ukraine in 2022, and misattributed to Iran in 2026. It is among the most-recycled pieces of fake war footage in the modern era — a data point that illustrates the permanence of viral fabrications once uploaded. The video was never fully purged from social media after its 2022 debunking, making its 2026 resurgence not just possible but predictable. [17] [8]

Section 8: How to Spot a Fake Iran War Video — Detection Methodology

Across Poynter, PolitiFact, Euronews, AAP FactCheck, and Full Fact, a consistent toolkit emerged for identifying fake conflict footage. [14] [8]

| Technique | How to Apply | Caught In This Report |
| --- | --- | --- |
| Reverse image search | Screenshot a video frame; run through Google Images or TinEye; check if the image predates February 2026 | Algeria football video (2020 origin), Tianjin explosion (2015 origin) |
| Google Lens / InVID-WeVerify | Extract keyframes from video for batch image search | Sharjah residential fire (2015 origin) |
| Google SynthID | AI-generated image/video metadata detection — reveals Google AI origin, gibberish coordinate embeds | Tehran Times satellite imagery, Khamenei body images |
| Visual artifact analysis | Look for: duplicated building elements, warped limbs, objects with wrong number of wheels/headlights, floating objects, unnatural smoke colors | AI Tel Aviv strike (rooftops), Bahrain fleet (cars with 3 headlights), PRISONBREAK (duplicated tattoos) |
| Audio analysis | Suspiciously quiet background, unintelligible voices, absence of expected sounds (sirens, crowd noise) | AI Tel Aviv ballistic missile video (absent sirens) |
| Military hardware identification | Match depicted vehicles/aircraft to hardware databases | War Thunder clip: USS Tennessee (WWII-era, decommissioned 1947), Messerschmitt Me 163B (Nazi German) |
| Clip length tells | Google's Veo AI generates clips up to 8 seconds; suspicious if video is short clips stitched together | Multiple AI-generated strike videos |
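
The reverse-image-search techniques above rest on perceptual hashing: reducing a frame to a small fingerprint that survives re-encoding and compression, so a 2015 archive frame still matches its 2026 repost. Real workflows use tools like InVID-WeVerify and TinEye; the pure-Python average hash below, with tiny 2×2 "frames" standing in for downscaled grayscale keyframes, is only a minimal sketch of the idea.

```python
# Minimal sketch of the perceptual-hash step behind reverse image search.
# The 2x2 grids below are illustrative stand-ins for downscaled grayscale
# keyframes; production tools hash e.g. 8x8 or 16x16 reductions.

def average_hash(gray: list[list[int]]) -> int:
    """Average hash: each pixel becomes a 1 bit if brighter than the mean."""
    flat = [p for row in gray for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Differing bits between two hashes; small distance = likely same image."""
    return bin(a ^ b).count("1")

# An 'archive' frame, a re-encoded repost (slight pixel noise), and an
# unrelated scene.
archive = [[10, 200], [220, 30]]
reposted = [[12, 198], [219, 33]]   # re-compression shifts values slightly
unrelated = [[200, 10], [30, 220]]

assert hamming(average_hash(archive), average_hash(reposted)) == 0   # match
assert hamming(average_hash(archive), average_hash(unrelated)) > 0   # no match
```

This robustness to small pixel changes is exactly why decade-old footage is catchable: cropping, re-compression, and watermarks barely move the hash, while a genuinely new scene lands far away in Hamming distance.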

Conclusion: The Incentive Structures Reward Fabrication

The lesson from War Thunder, Tianjin, and an Algerian football celebration is not simply that people are credulous — it's that the platform incentive structures reward fabrication and punish verification. Fabricating a fake carrier sinking video takes minutes and generates $160–$240 in X creator revenue before any label appears. Verifying authentic footage from inside Iran is impossible when Iran's internet is 98% dark and citizen journalists are offline. [3]

Three state actors operated simultaneously — Iran manufacturing fictional victories for domestic consumption and foreign amplification; an Israeli-backed network deploying AI deepfakes pre-coordinated with the military campaign; Russia parasitically recycling Iran's fake claims to serve an entirely different Ukraine-focused narrative. All three used AI tools that were commercially accessible to anyone. The conflict that mattered in the information environment was not the one in Iran — it was the one running on social media servers in three directions at once. [13] [20]

The Tianjin explosion footage — misused and debunked in conflict after conflict across 11 years — makes the point plainly: fake war footage never fully disappears. It waits for the next war. The 2026 Iran conflict is that war for a dozen pieces of fabricated content simultaneously, amplified by AI tools that generate new fakes faster than fact-checkers can debunk the old ones, on platforms whose financial logic makes the problem not merely persistent but profitable. [8] [19]

| Date | Event |
| --- | --- |
| Feb 28, 2026 | Operation Epic Fury launches; Khamenei dies; Cloudflare reports 98% Iran internet traffic drop; Tehran Times posts fake Al-Udeid satellite image within hours |
| Mar 1, 2026 | Tianjin explosion as Tel Aviv; Sharjah fire as CIA Dubai HQ; first AI-generated missile strike videos; War Thunder clip hits 7.3M views; Gov. Abbott shares it captioned "Bye bye" |
| Mar 2, 2026 | Algeria football video recirculated (debunked in 2023) as "Iranian missiles on Tel Aviv" — 4M+ views |
| Mar 3, 2026 | X announces 90-day creator revenue suspension for undisclosed AI conflict videos |
| Mar 5, 2026 | Elon Musk posts: "Use Grok to fact check and ask questions about any post" — endorsing the tool while it is spreading misinformation |
| Mar 6, 2026 | Euronews/The Cube publishes fake video catalog; HRW documents Iran internet shutdown human rights violations; NewsGuard documents 18 false Iranian war claims |
| Mar 7, 2026 | Operation Overload shares false Israeli intelligence warning targeting diaspora Jews in Germany and the US |
| Mar 8, 2026 | ABC News/AP report on state actor coordination; Citizen Lab documents PRISONBREAK network |
| Mar 10, 2026 | RTÉ publishes Grok systematic failure investigation; Euronews documents Matryoshka AI voice-cloning operation |
| Mar 11, 2026 | CNN reports AI-generated content has reached "tens of millions of views"; BBC Verify documents 3 videos exceeding 100M combined views |
| Mar 12, 2026 | PolitiFact publishes identification guide; Tal Hagin warns "fabrications are becoming more convincing" |
| Mar 13, 2026 | One pro-Iranian account reaches 1.4M followers (up from 700K in under a week); misinformation cycle continues |