State actors are behind much of the visual misinformation about the Iran war
By MELISSA GOLDIN, Associated Press
As attacks spread after the bombing of Iran by U.S. and Israeli forces, a video circulated widely of crowds peering up at fire, smoke and debris coming from the top of a high-rise building said to be in Bahrain.
Social media users claimed an Iranian attack had hit the skyscraper. But while buildings in Bahrain have been struck by Iranian missiles during the Iran war, this video wasn’t real. It was generated with artificial intelligence and shared by accounts associated with the Iranian government as part of an effort to amplify its successes.
There are multiple clues that the video was not authentic, including two cars on the left side of the clip that appear stuck together and a man in the bottom-right corner whose elbow seems to move straight through a backpack.
A deluge of misrepresented or fabricated videos has spread widely online since the Iran war began last weekend, fueled in part by state-linked propaganda and influence campaigns — particularly around who is winning the war and how many casualties there have been.
“The content that’s coming from state actors tends to be a little better targeted,” said Melanie Smith, senior director of policy and research on information operations at the Institute for Strategic Dialogue. “They have a very clear kind of narrative structure and the videos are just used to support some kind of statement they want to make about the conflict and about the kind of geopolitical situation writ large.”
Pro-Iran social media accounts have adopted a narrative that exaggerates the destruction and death tolls wrought by the country’s military — a position supported by what is being reported in Iranian state media. This has led to a large number of AI-generated videos of supposed air strikes, such as the one of the Bahraini high-rise on fire.
An ongoing Russia-aligned influence operation called Operation Overload, also referred to as Matryoshka or Storm-1679, has been posting videos designed to impersonate intelligence agencies and news outlets, undermining people’s sense of safety in an effort to sway their behavior — a tactic the network has previously used during election cycles. For example, it shared a warning falsely attributed to Israeli intelligence telling Israelis in Germany and the U.S. to be cautious when in public or to not go outside at all.
Iranian censorship confuses matters further
Misrepresented and fabricated videos have been a key feature of other recent conflicts, such as the Russia-Ukraine and Israel-Hamas wars, but experts say a major difference now is the lack of information from the Iranian public due to internet shutdowns and general censorship — a loss of perspectives that could have worked both for and against the Iranian government.
“In Ukraine, that message was so full-throated it really changed the entire dynamic of the conflict because the world really aligned with the perspective of Ukrainians facing the attacks and showing resilience in light of the attacks, but we’re sort of missing that story from Iran,” said Todd Helmus, a senior behavioral scientist at RAND who studies irregular warfare, terrorism and information operations.
In search of clicks, opportunistic social media users not affiliated with state actors have also contributed heavily to the misinformation that has spread during the first days of the Iran war, presenting old footage from other conflicts as recent, sharing video game clips as real and posting their own AI-generated content.
AI, in particular, has helped fuel misinformation in ways that weren’t possible during past conflicts, even just a few years ago. Coupled with state-linked disinformation and censorship, this creates an even wider vacuum in which the truth can get lost.
“The volume of AI content is starting to just pollute the information environment in these kinds of crisis settings to a really terrifying degree,” Smith said. “The inability to get access to verified and credible information in times like this — it’s getting harder and harder to do that.”
Nikita Bier, X’s head of product, wrote in a Tuesday post that the platform will suspend users from its revenue-sharing program if they post AI-generated content from an armed conflict without a proper disclosure. The suspensions are 90 days for a first offense and permanent after that.
Emerson Brooking, director of strategy and resident senior fellow at the Atlantic Council’s Digital Forensic Research Lab, warns that social media platforms are now frontlines in war, and that users should be aware of their potential to be used by state actors, even if they are located thousands of miles away from on-the-ground action.
“If you’re in these spaces, just understand that this is an extension of the physical battle space,” he said. “That there are actors on all sides of the conflict that are actively trying to spread propaganda and disinformation to convince you that certain things are true that aren’t. That your eyeballs and your attention are an asset.”
___
Find AP Fact Checks here: https://apnews.com/APFactCheck.