Cybersecurity News and Headlines

Cybersecurity Pulse

Cybersecurity

How AI Image Tools Can Be Tricked Into Making Political Propaganda

Help Net Security • January 14, 2026

Why It Matters

The technique demonstrates a scalable method for AI‑generated disinformation, forcing providers to rethink political safety mechanisms before such tools become mainstream.

Key Takeaways

  • Multilingual prompts bypass filters at rates of up to 86%
  • Language selection is critical; random language mixing fails
  • Strict system instructions block all attempts, including benign ones
  • Filters stop direct names but not indirect descriptions
  • Success was consistent across major economies' leaders

Pulse Analysis

The rapid adoption of text‑to‑image generators such as GPT‑4o, GPT‑5 and GPT‑5.1 has opened a new vector for political disinformation. While developers have built layered safety nets that catch overt sexual or violent content, a recent academic benchmark demonstrates that these systems stumble when faced with cleverly crafted prompts. By substituting explicit names and symbols with descriptive profiles and then translating fragments into low‑risk languages, researchers were able to produce convincing propaganda‑style images of world leaders. The experiment revealed bypass rates as high as 86% on a leading platform, exposing a glaring blind spot in current AI moderation.

The core of the attack lies in two observations. First, keyword‑based filters rely on direct mentions of politicians or extremist icons; when prompts describe a figure’s appearance or a symbol’s historical context, the model still recognises the visual cue while the filter sees no red flag. Second, spreading the description across multiple languages fragments the semantic link between entities, preventing the filter’s language‑specific risk scores from aggregating the full political meaning. This multilingual sharding exploits the uneven political sensitivity of language models, turning otherwise harmless fragments into a coordinated disinformation tool.
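The first observation can be sketched with a toy keyword filter. The blocklist, prompts, and matching logic below are hypothetical illustrations of the general weakness the researchers describe, not the filters used by any real image-generation service:

```python
# Toy illustration: a direct-mention blocklist catches a named figure
# but not a purely descriptive prompt that evokes the same person.
# All names and list entries here are invented placeholders.

BLOCKLIST = {"president", "leadername"}  # hypothetical blocked tokens

def keyword_filter_blocks(prompt: str) -> bool:
    """Return True if any token in the prompt is on the blocklist."""
    tokens = (tok.strip(".,") for tok in prompt.lower().split())
    return any(tok in BLOCKLIST for tok in tokens)

direct = "A heroic portrait of President LeaderName at a rally"
indirect = ("A heroic portrait of a national head of state in a dark suit, "
            "addressing a cheering crowd")

print(keyword_filter_blocks(direct))    # the named prompt is blocked
print(keyword_filter_blocks(indirect))  # the descriptive one passes
```

The image model, unlike the filter, still resolves the description to the same visual subject, which is exactly the gap the benchmark exploits.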

Defensive options are emerging but involve steep trade‑offs. Forcing all inputs back into the language most associated with the political subject slashes success to under 20 %, yet it does not eliminate clever workarounds. Introducing a hard system instruction can block every attempt, but it also censors legitimate political queries, undermining user utility. The findings suggest that future safeguards must move beyond surface‑level keyword lists toward deeper semantic reasoning and cross‑lingual context awareness. Policymakers, platform operators, and AI researchers will need to collaborate on standards that balance free expression with the prevention of AI‑driven propaganda.
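The sharding effect and the aggregation defense can be illustrated with a toy risk scorer. The phrase list, threshold, and fragments below are invented for illustration; in the real attack each fragment arrives in a different language, and the point is that per-fragment scoring never sees the combined meaning:

```python
# Toy model of multilingual "sharding": each fragment scores low in
# isolation, so a per-fragment filter passes it, while scoring the
# normalized, recombined prompt crosses the threshold. The phrase
# list and threshold are hypothetical.

RISKY_PHRASES = {"head of state", "rally", "propaganda poster"}
THRESHOLD = 2  # hypothetical: two or more risky phrases triggers a block

def risk_score(text: str) -> int:
    """Count risky phrases appearing in the (translated) text."""
    return sum(phrase in text.lower() for phrase in RISKY_PHRASES)

# Fragments as they would read after translation back into one language.
fragments = ["portrait of a head of state", "at a rally", "poster style"]

# Naive defense: score each fragment on its own -- none is flagged.
per_fragment_flagged = any(risk_score(f) >= THRESHOLD for f in fragments)

# Stronger defense: normalize to one language, recombine, then score.
aggregated_flagged = risk_score(" ".join(fragments)) >= THRESHOLD

print(per_fragment_flagged)  # fragments look harmless alone
print(aggregated_flagged)    # the combined prompt is flagged
```

This mirrors the article's finding: forcing inputs back into a single language lets the filter aggregate political context it otherwise never assembles, at the cost of the trade-offs described above.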
