War Game Exercise Demonstrates How Social Media Manipulation Works

Dark Reading | Apr 14, 2026

Why It Matters

The experiment shows that low‑cost AI tools can shift public sentiment in a simulated election, underscoring the need for platforms and regulators to strengthen detection of synthetic influence before real elections are affected.

Key Takeaways

  • 270 students from 18 Australian universities played “Capture the Narrative”.
  • AI‑driven bots shifted the simulated election by 1.8 percentage points.
  • Game used 12 LLM instances and 40‑plus bot personality attributes.
  • Participants built influence tools on budgets of $0‑$66, at times crashing the game's servers.
  • Findings highlight need for platforms to detect AI‑generated fake content at scale.

Pulse Analysis

The University of New South Wales turned a classroom exercise into a four‑week war game called “Capture the Narrative.” Over 270 participants from 18 Australian universities deployed AI‑driven bots on a custom social‑media sandbox, Legit Social, to sway a simulated South Pacific island election. The teams managed to tip the vote by 1.8 percentage points, demonstrating that coordinated synthetic content can meaningfully alter public opinion even in a controlled environment. The scenario mirrors real‑world campaigns in which state‑linked bots have attempted to sway referendums and elections in Australia and the United States, making the findings directly applicable to ongoing geopolitical information battles.

The platform was built with a Python back end and React front end, featuring a trending algorithm and a chronological feed. Behind the scenes, 12 large‑language‑model instances powered more than 40 personality attributes for each non‑player bot, allowing beliefs to evolve dynamically. NPC bots consumed the feed, performed sentiment analysis, and adjusted their messaging, mirroring the closed feedback loops that sophisticated disinformation outfits use to maximize engagement. Remarkably, students achieved these results on budgets of $0‑$66, at times crashing the servers under the volume of generated posts: proof that low‑cost AI tools can produce large‑scale, real‑time influence.
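The closed loop described above (bots read the feed, score its sentiment, and shift their own messaging in response) can be sketched in a few lines of Python. This is a hypothetical illustration only: the class, function names, lexicon, and learning rate are invented for this sketch and are not taken from the actual Legit Social codebase, which used LLM instances rather than a word‑list scorer.

```python
# Hypothetical sketch of a closed-loop NPC bot: read feed -> score
# sentiment -> nudge belief -> post. All names here are illustrative.

POSITIVE = {"great", "trust", "win", "support", "honest"}
NEGATIVE = {"corrupt", "lies", "fail", "scandal", "weak"}

def sentiment(post: str) -> float:
    """Crude lexicon-based sentiment score in [-1, 1] for one post."""
    words = post.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

class NPCBot:
    def __init__(self, belief: float = 0.0, learning_rate: float = 0.1):
        self.belief = belief            # -1 (opposes) .. +1 (supports)
        self.learning_rate = learning_rate

    def consume_feed(self, feed: list[str]) -> None:
        """Shift belief a fraction of the way toward the feed's mean sentiment."""
        if not feed:
            return
        avg = sum(sentiment(p) for p in feed) / len(feed)
        self.belief += self.learning_rate * (avg - self.belief)

    def compose_post(self) -> str:
        stance = "support" if self.belief >= 0 else "oppose"
        return f"I {stance} the candidate"

bot = NPCBot()
bot.consume_feed(["she will win trust her", "great honest plan"])
print(bot.compose_post())  # belief drifts positive, so the bot posts support
```

A real influence campaign would replace the word‑list scorer with an LLM call, but the feedback structure (consume, score, adapt, emit) is the same, which is what makes this loop cheap to run at scale.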

For social‑media companies, the experiment underscores the urgency of deploying scalable detection of AI‑generated misinformation. Regulators and policymakers can use the war game's data to craft guidelines requiring transparency around synthetic accounts and real‑time content audits. Meanwhile, academia gains a repeatable testbed for cyber‑literacy, letting students experience both the creation and the mitigation of influence operations, a critical skill set as AI‑enhanced disinformation becomes a staple of modern election warfare. By publishing the methodology at Black Hat Asia 2026, UNSW aims to spark industry collaboration on defensive AI tools, encouraging platforms to share threat intelligence and adopt proactive moderation frameworks.
