Claude Code Leak: Researchers Find First Vulnerability

Notebookcheck, Apr 7, 2026

Key Takeaways

  • Source map leak exposed 512k lines of Claude Code
  • Vulnerability bypassed deny rules after 50 subcommands
  • Attackers could exfiltrate SSH and cloud credentials
  • Fixed in version 2.1.90 with parser overhaul
  • Incident highlights need for secure AI tool supply chains

Pulse Analysis

The rapid adoption of AI‑powered coding assistants like Claude Code has introduced new attack surfaces that traditional security tooling often overlooks. While the accidental source‑map release on npm gave researchers a rare glimpse into the agent's internals, it also accelerated the discovery of a subtle permission‑bypass bug. Because the permission checker parsed only the first 50 subcommands of a chained shell command, longer chains escaped deny‑rule enforcement entirely, enabling prompt‑injection attacks that could silently siphon private keys from a developer's workstation.

Exploiting this flaw does not require direct access to the model or user data; instead, an adversary can embed a malicious "CLAUDE.md" file in a public repository. When a developer invokes Claude Code to build the project, the AI executes a chain of 50 benign‑looking commands, allowing the 51st command, typically a data‑exfiltration request, to slip past security prompts unchecked. The potential impact ranges from compromised SSH keys to unauthorized cloud credential harvesting, posing a tangible threat to enterprises that integrate AI assistants into their development pipelines.

Anthropic's swift response, documented in the 2.1.90 changelog, replaced the fragile parser with a robust one that enforces deny rules irrespective of command length. This incident serves as a cautionary tale for AI tool vendors and users alike: rigorous code review, secure supply‑chain practices, and continuous vulnerability assessments are essential as AI components become embedded in critical workflows. Organizations should adopt defense‑in‑depth strategies, including sandboxed execution environments and strict policy controls, to mitigate similar risks in future AI‑driven development tools.
