Study Finds Iran’s State‑Run Disinformation Network Active on X, Instagram, Bluesky and TikTok

Pulse · Mar 29, 2026

Why It Matters

The uncovering of a state‑run disinformation network underscores how modern conflicts are fought not only with missiles but also with narratives that can shape public opinion across borders. By infiltrating mainstream platforms, Iran can influence democratic debates, sway election discourse and destabilize markets, amplifying the strategic impact of its military actions. The findings also expose gaps in current content‑moderation frameworks, where human‑operated campaigns evade automated detection, prompting a reassessment of platform responsibility and international policy. For the media industry, the spread of coordinated falsehoods threatens the credibility of legitimate journalism and complicates the task of fact‑checkers. Advertisers risk brand safety issues when their ads appear alongside manipulated content, while newsrooms must allocate more resources to verify sources in real time. The episode may accelerate calls for industry‑wide standards on political advertising, AI‑generated media disclosure, and cross‑border cooperation to counter state‑sponsored propaganda.

Key Takeaways

  • Clemson researchers identified at least 62 IRGC‑controlled accounts on X, Instagram and Bluesky.
  • Accounts masquerade as Spanish‑speaking users in the U.S. and Latin America, and English speakers in the U.K. and Ireland.
  • Disinformation includes AI‑generated videos, repurposed influencer content, and rapid amplification of Russian‑state TV clips.
  • Platforms have begun takedowns, but human‑run networks remain difficult to detect.
  • The campaign coincided with a 2.3% drop in India’s BSE Sensex, an exchange with roughly $4.46 trillion in total market value.

Pulse Analysis

Iran’s digital offensive marks a maturation of state‑sponsored propaganda that leverages the same platforms that Western democracies rely on for public discourse. Unlike earlier bot‑heavy campaigns, the IRGC’s reliance on real‑person operators allows it to craft context‑specific narratives, mimic local dialects and respond in real time to breaking news. This agility makes the content more persuasive and harder for automated filters to flag, forcing platforms to invest in more nuanced, human‑in‑the‑loop review processes.

Historically, disinformation has been a peripheral tool in Iran’s foreign policy, but the current Gulf conflict has elevated it to a core strategic asset. By sowing doubt about U.S. motives, highlighting alleged Israeli aggression and portraying the war as a Western provocation, Tehran seeks to fracture the coalition supporting Israel and to dampen domestic pressure on allied governments. The timing aligns with market turbulence, suggesting a deliberate attempt to exploit economic anxiety as a vector for influence.

Looking ahead, the convergence of AI‑generated deepfakes and state‑run networks could redefine the threat landscape. Regulators may push for mandatory provenance labeling for political content, while platforms could adopt shared threat‑intelligence feeds to pre‑empt coordinated campaigns. For media companies, the imperative is clear: invest in rapid verification tools, partner with academic labs like Clemson’s Media Forensics Hub, and educate audiences about the hallmarks of state‑sponsored narratives. Failure to adapt could erode trust in the digital news ecosystem and give authoritarian actors a disproportionate voice in shaping global opinion.
