The video spread at a velocity only social media can generate. Crowds of people screaming and running in panic — captioned "Israelis fleeing as Iran's missiles arrive." Jackson Hinkle, one of the most-followed anti-establishment influencers on X, shared it. It racked up nearly 5 million views. The problem: the footage showed Louis Tomlinson fans near the Four Seasons Hotel in Buenos Aires, Argentina. Not a single Israeli was in frame.
Fact-checkers at Fact Crescendo caught it. The debunk circulated. Hinkle trended anyway — this time for the correction, not the original post. As of 4 AM Central on April 6, he has been among the top trending topics on X for 15 consecutive hours.
This is not an isolated incident. It is the defining pattern of the Iran war's information environment: fabricated content that goes viral at scale, overwhelms fact-checking infrastructure, and continues influencing perception even after being debunked. The Buenos Aires video is one documented example. NewsGuard has now tracked 50 distinct false claims in the first 25 days of the conflict. The New York Times identified more than 110 AI-generated images and videos in the first two weeks alone. And a single Iranian state-linked deepfake campaign generated over 145 million views inside a few days.
Who Is Jackson Hinkle — and Why Does He Keep Trending?
Jackson Hinkle is a 26-year-old political commentator and host of "Legitimate Targets with Jackson Hinkle," a podcast and social media presence with millions of followers across X, YouTube, and other platforms. He operates in an ideological space difficult to label — critics call him a "red-brown" influencer who blends far-left anti-imperialism with far-right nationalist rhetoric, producing content that appeals to anti-establishment audiences on both ends.
His posts have been repeatedly cited by Russian and Iranian state-affiliated media. Russia's Lenta.ru used a Hinkle headline to describe the 2023 Ukrainian counteroffensive as a "suicide mission." During the Israel-Hamas conflict, he was deplatformed from YouTube.
Since the Iran war began on February 28, Hinkle has become one of the most active sources of contested Iran war content on X. In the first days of the war, he shared footage that CENTCOM denied. He has posted videos later traced to video game simulators. The Buenos Aires incident is simply the most recent, and most clearly documented, example of his amplification of false footage.
What makes Hinkle significant isn't his individual posts — it's his reach. Each false claim he shares moves through a network of millions of followers before debunks arrive. And X's algorithmic amplification does not deprioritize engagement from misinformation; it rewards it.
The Scale of the Deepfake War: What's Been Documented
The Iran war is the first major conflict in which AI-generated video has become a genuine front-line weapon — not in some future scenario, but right now, measurably, at industrial scale.
The numbers from three independent research organizations tell the story:
- NewsGuard has tracked 50 distinct false claims about the Iran war in its first 25 days — an average of two verified false narratives per day, with the volume still climbing as of publication.
- The New York Times identified more than 110 AI-generated or AI-manipulated images and videos in just the first two weeks.
- Cyabra, a social media research firm, documented a pro-Iran campaign deploying tens of thousands of fake accounts that generated over 145 million views in days — primarily through AI deepfakes portraying Iran as winning the military conflict.
- The Institute for Strategic Dialogue (ISD), which shared analysis with WIRED, documented a pro-regime propaganda network using AI-generated posts depicting Orthodox Jews leading American soldiers to war and celebrating American deaths.
The examples span every type of digital fabrication:
AI-fabricated video: Videos of the USS Abraham Lincoln carrier ablaze and sinking spread so convincingly that, according to Deadline reporting, President Trump called his generals to verify whether the carrier had been hit. It had not. U.S. Central Command was unambiguous: "The Lincoln was not hit. The missiles launched didn't even come close." Analysis using the AI detection tool Hive found approximately 99.9% of the content in those videos was AI-generated.
Recycled real footage, wrong caption: AFP fact-checkers caught images of burning vehicles in Tel Aviv that actually showed the January 2026 protests in Tehran. Snopes debunked a "new" Iranian strike on Tel Aviv as footage from June 2025. The same category includes the Buenos Aires-to-Israel misidentification that sent Hinkle trending this week.
Video game footage: A fake video of an Iranian missile destroying a U.S. fighter jet, traced by BBC Verify to a military flight simulator, accumulated 70 million views in a single weekend before being taken down.
Fabricated still images: AI images of members of Delta Force allegedly captured by Iranian authorities were viewed over 5 million times before deletion. Images of a high-rise building in Bahrain on fire were shared by Iranian officials and state media; the building was not on fire.
Both Sides Are Playing This Game
The disinformation campaign is not solely Iranian. The information battlefield is contested by all parties — and the U.S. government is not an innocent bystander.
The White House posted approximately a dozen "hype videos" to X and TikTok in the first weeks of the conflict — montages that wove together Call of Duty killstreaks, Iron Man clips, Top Gun footage, and actual strike videos. Wikipedia's article on media coverage of the 2026 Iran war notes the White House faced "backlash over a video that mixed real footage with video game clips."
Marc Owen Jones, associate professor of media analytics at Northwestern University in Qatar and a specialist in how social media and disinformation influence public opinion, described the American approach to Euronews as "videos intercut with Hollywood clips, a sort of memeification of communication designed to appeal to a far-right aesthetic that rejects empathy in favour of humiliation."
On the Iranian side, IRGC spokesman Ali Mohammad Naini claimed 650 American troops were killed or wounded in the conflict's first two days. CENTCOM confirmed six had actually been killed. Iranian state broadcaster IRIB TV1 was documented airing fabricated footage — in one instance using muted video of an Israeli attack on Iran while narrating a story about Iran striking Israel.
Chinese state media also circulated a fake image claiming Iraqi resistance had downed a U.S. KC-135 refueling aircraft. A Clemson University study published in late March found IRGC-linked accounts flooding X, Instagram, and Bluesky with AI-generated videos — including deepfakes mocking President Trump styled after Lego movies — reaching millions of viewers.
Why Grok Is Making It Worse
X's own AI chatbot, Grok, has compounded the problem in a way that is structurally different from user error: it is an algorithmic fact-checking failure built into the platform itself.
When disinformation researcher Tal Hagin asked Grok to verify a post about Iranian missiles allegedly striking Tel Aviv, Grok misidentified the location and date of the video — which had been shared by Iranian state-owned media Press TV — and then tried to prove its point by generating and sharing its own AI image of destruction. "Now Grok is replying with AI slop of destruction," Hagin wrote. "Cooked I tell you."
The pattern repeated with Netanyahu. When the Israeli prime minister posted videos to rebut viral claims of his death, Grok declared the footage was AI-generated — a claim that was itself immediately debunked, but had already spread further before corrections arrived.
The structure of the problem is circular and self-reinforcing: AI generates the fakes, AI "verifies" the fakes, and the platform's engagement algorithms reward the fakes before corrections can catch up. Truth has no structural entry point.
The Speed Problem: Why Debunks Don't Work Fast Enough
The fundamental challenge facing fact-checkers is not accuracy — they are often right, and fast. The challenge is speed asymmetry.
"In a fast-moving conflict, verified information is often delayed, which creates a vacuum that misinformation fills immediately," Marc Owen Jones told Euronews. "When people are worried, they crave information, but that information is often false."
A fake video can reach millions of viewers within minutes of posting. A thorough debunk, even published hours later, reaches only a fraction of the original audience. Most people who saw the false content never see the correction.
Steven Feldstein, a senior fellow at the Carnegie Endowment for International Peace, describes an evolution toward what he calls the "shallow fake" — manipulating what is real rather than fabricating outright, because manipulation is harder to detect and the threshold for public doubt is lower. You don't need to convince someone a fake is real; you only need to make them uncertain about what is real.
"The advent of gen AI propaganda and the further erosion of trust in gatekeeping institutions make it even more difficult to combat the spread of industrial-level fabricated information," Feldstein told Deadline.
Alex Hamerstone, Advisory Solutions Director at TrustedSec, put the technology barrier in concrete terms: "Content can be created instantly, and the types of fake videos that would have taken highly trained people working with expensive software just a few years ago can now be created by anyone with a cell phone and a free app."
What the Record Shows
The documented facts of the Iran war information environment, without editorializing:
- NewsGuard has verified 50 false claims in 25 days, averaging two per day, with the volume increasing over time
- The New York Times identified 110+ AI-fabricated images and videos in the first 14 days
- A single pro-Iran deepfake network generated 145 million views in days (Cyabra)
- A fake fighter jet video traced to a military simulator: 70 million views in one weekend (BBC Verify)
- AI images of Delta Force allegedly captured: 5 million views before deletion
- The USS Abraham Lincoln was not struck; fake videos of its sinking were convincing enough to prompt a presidential call to generals
- Jackson Hinkle's Buenos Aires-labeled-as-Israel video: nearly 5 million views (Fact Crescendo)
- X's Grok AI chatbot has repeatedly "verified" false content as real and generated new AI images in responses to fact-check requests
- Both the U.S. White House and Iranian state media produced documented disinformation in the first weeks of the conflict
The information war is not a side effect of the military conflict. At 145 million views for a single campaign, it may be reaching more people than any missile.
The question is not whether you have seen disinformation about the Iran war. The question is how much of what you believe you saw was real.