Cybersecurity News and Headlines

Cybersecurity Pulse

Chinese Group’s ChatGPT Use Reveals Worldwide Harassment Campaign Against Critics

Tags: GovTech, Defense, Cybersecurity

CyberScoop • February 25, 2026

Why It Matters

The episode shows how generative AI can amplify state‑backed influence campaigns, raising urgent security and policy challenges for governments and businesses worldwide.

Key Takeaways

  • Single Chinese account used ChatGPT for propaganda planning
  • Operations employed thousands of fake accounts and hundreds of staff
  • AI tools amplified global harassment of China’s critics online
  • No evidence ChatGPT facilitated automated cyber‑attack operations
  • Actors switch between multiple AI models throughout campaigns

Pulse Analysis

OpenAI’s latest threat report shines a light on a Chinese law‑enforcement unit that leveraged ChatGPT to edit internal briefings and to draft a propaganda push against Japan’s prime minister. The single account uploaded dozens of operation reports, revealing a coordinated effort that spans mass posting, bogus complaints, forged documents and even impersonation of U.S. officials. OpenAI estimates the campaign involved hundreds of staff members and thousands of synthetic social‑media profiles, indicating a resource‑intensive, sustained harassment operation aimed at silencing dissent both domestically and abroad.

These findings illustrate how generative AI can act as a force multiplier for state‑backed influence campaigns. Actors are not limited to a single model; the Chinese group paired ChatGPT with domestic systems like DeepSeek to translate, craft narratives, and automate repetitive tasks. While OpenAI found no direct use of ChatGPT for automated hacking, the ease of obtaining public‑domain data and producing convincing content lowers the barrier for large‑scale disinformation. Similar patterns have emerged in Russian‑aligned operations, underscoring a broader trend of AI‑enhanced propaganda across geopolitical rivals.

Policymakers and security teams must now grapple with AI‑driven threat vectors that blend human oversight and machine speed. Robust monitoring of AI‑generated content, cross‑platform attribution, and stricter verification of account authenticity are emerging as essential defenses. As AI models become more accessible, businesses should anticipate heightened phishing and reputation‑damage campaigns, integrating AI‑awareness into their risk frameworks. The OpenAI report serves as a warning that without coordinated safeguards, generative AI will continue to empower malicious actors seeking to silence critics and manipulate public discourse worldwide.

Read the original article: Chinese group’s ChatGPT use reveals worldwide harassment campaign against critics (CyberScoop)
