Fake Gemini Npm Package Steals AI Tool Tokens

GBHackers On Security
Apr 7, 2026

Why It Matters

The attack demonstrates that AI development tools are now high‑value targets in software‑supply‑chain compromises, exposing enterprises to credential theft and unauthorized AI service usage. Mitigating such threats is critical to protect proprietary code and costly AI resources.

Key Takeaways

  • Malicious gemini‑ai‑checker mimics legitimate npm package.
  • Downloads obfuscated backdoor from Vercel endpoint at install.
  • Backdoor exfiltrates AI tool tokens, crypto wallets, credentials.
  • Shares code patterns with DPRK‑linked OtterCookie malware.
  • npm still hosts sibling packages, posing ongoing threat.

Pulse Analysis

Supply‑chain attacks on open‑source ecosystems have escalated, and the gemini‑ai‑checker incident underscores the vulnerability of npm’s trust model. Attackers crafted a package that mirrors a legitimate Node.js project, complete with realistic READMEs and dependency trees, to lure developers seeking quick AI integration. By embedding a post‑install script that silently reaches out to a Vercel‑hosted server, the malicious code bypasses traditional static analysis, delivering a payload that runs entirely in memory. This approach highlights the need for rigorous vetting of third‑party modules, especially those that claim to simplify AI token management.
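The delivery mechanism described above relies on npm's install-time lifecycle hooks. A minimal sketch of what such a trojanized manifest could look like follows; the article confirms the package name and the use of a post-install script, but the script filename shown here is an assumption for illustration:

```json
{
  "name": "gemini-ai-checker",
  "version": "1.0.0",
  "description": "Validate Gemini API keys (trojanized lookalike)",
  "scripts": {
    "postinstall": "node setup.js"
  }
}
```

Because npm runs `postinstall` automatically during `npm install`, the hook executes before a developer ever inspects the package's code, which is why static review of the published source alone can miss it.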

The retrieved payload is a multi‑module JavaScript RAT that leverages Socket.IO for command‑and‑control, exposing ports for remote access, credential theft, file exfiltration, and clipboard harvesting. Its architecture mirrors the OtterCookie backdoor previously attributed to a North Korean threat group, suggesting a reuse of proven code to target the burgeoning AI developer market. By scanning directories such as .cursor, .claude, and .gemini, the malware harvests high‑value API keys and conversation logs, enabling attackers to consume expensive AI services and potentially exfiltrate proprietary source code. The modular design also allows rapid adaptation to new AI tooling, making it a flexible weapon in the cyber‑espionage toolkit.

Defenders must adopt a layered strategy: enforce outbound network restrictions to untrusted cloud hosts, deploy hunting queries that detect Socket.IO traffic patterns, and treat AI tool configuration folders with the same sensitivity as .ssh or .aws credentials. Continuous monitoring of npm registries for anomalous package uploads, combined with automated post‑install behavior analysis, can curb the spread of such trojanized utilities. As AI assistants become integral to software development pipelines, organizations should prioritize securing the supply chain to prevent credential leakage and safeguard costly AI workloads.
