
Fake Moltbot AI Coding Assistant on VS Code Marketplace Drops Malware
Why It Matters
The attack demonstrates how popular AI tools can be weaponized to infiltrate developer environments and expose sensitive credentials, highlighting urgent supply‑chain security concerns for the software industry.
Key Takeaways
- Fake Moltbot VS Code extension drops remote‑desktop malware
- Extension auto‑runs, fetches config.json from external server
- Deploys ScreenConnect client, granting persistent attacker access
- Moltbot misconfigurations expose API keys and chat histories
- Users should audit configs, enforce firewalls, monitor C2 traffic
Pulse Analysis
Moltbot, the open‑source AI coding assistant created by Austrian developer Peter Steinberger, has rapidly gained traction among developers, amassing over 85,000 GitHub stars. Its ability to run large language models locally and integrate with messaging platforms such as Slack, Discord, and Microsoft Teams makes it an attractive tool for both hobbyists and enterprises. However, the project's popularity also creates a fertile hunting ground for threat actors. Because Moltbot does not provide an official Visual Studio Code extension, malicious actors published a counterfeit “ClawdBot Agent – AI Coding Assistant” on the official Marketplace, exploiting developers’ trust in the brand to distribute malware.
The counterfeit extension, identified as clawdbot.clawdbot-agent, executes automatically each time VS Code launches. It contacts an external server to download a config.json file, which instructs the extension to retrieve a binary named Code.exe. That binary installs ConnectWise ScreenConnect, a legitimate remote‑desktop client abused here to establish a persistent foothold on the compromised machine. To ensure reliability, the payload includes fallback mechanisms: a Rust‑compiled DWrite.dll is sideloaded from Dropbox, and a batch script can pull the same components from a secondary domain. All traffic is directed to attacker‑controlled C2 endpoints, enabling silent command‑and‑control and data exfiltration.
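Because the extension's identifier (clawdbot.clawdbot-agent) is known, checking a workstation for it is straightforward. The sketch below is one minimal way to do that in Python: it assumes the default VS Code install layout, where each extension lives in a folder named publisher.name-version under ~/.vscode/extensions; the helper name and the hard-coded ID list are illustrative, not part of any official tooling.

```python
from pathlib import Path

# Identifier reported for the counterfeit extension (illustrative blocklist).
MALICIOUS_IDS = {"clawdbot.clawdbot-agent"}


def find_suspicious_extensions(extensions_dir: Path) -> list[str]:
    """Return installed extension folders whose names match a known-bad ID.

    VS Code installs each extension in a folder named
    <publisher>.<name>-<version> under the extensions directory.
    """
    hits = []
    if not extensions_dir.is_dir():
        return hits
    for entry in extensions_dir.iterdir():
        if not entry.is_dir():
            continue
        # Strip the trailing "-<version>" suffix to recover publisher.name.
        base = entry.name.rsplit("-", 1)[0]
        if base.lower() in MALICIOUS_IDS:
            hits.append(entry.name)
    return sorted(hits)


if __name__ == "__main__":
    default_dir = Path.home() / ".vscode" / "extensions"
    for hit in find_suspicious_extensions(default_dir):
        print(f"Suspicious extension found: {hit}")
```

Detection alone is not remediation: because the payload persists via ScreenConnect rather than the extension itself, removing the folder should be followed by checking for and uninstalling any unexpected remote-desktop client.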
The incident underscores a broader supply‑chain risk for AI‑powered developer tools. Researchers have already discovered hundreds of unsecured Moltbot instances exposing API keys, OAuth tokens, and private chat logs, which could be weaponized for credential theft or impersonation attacks. Organizations deploying Moltbot or similar agents should audit configurations, revoke unnecessary service integrations, and enforce outbound network restrictions to block unknown C2 domains. Continuous monitoring for anomalous processes and remote‑desktop connections is essential to detect compromise early. This episode serves as a reminder that rapid adoption of open‑source AI utilities must be matched with rigorous security hygiene to prevent malicious exploitation.
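As a starting point for the configuration audit described above, a simple credential scan can flag files that appear to contain API keys or tokens in plaintext. The following is a heuristic sketch only: the regex patterns are illustrative approximations of common token formats (Slack-style, OpenAI-style, generic key=value assignments), not an exhaustive or authoritative detector, and a real audit should match the exact key formats of the services a given Moltbot instance integrates with.

```python
import re
from pathlib import Path

# Illustrative patterns for common credential formats (assumption: real
# deployments should tailor these to their integrated services).
TOKEN_PATTERNS = {
    "Slack token": re.compile(r"xox[abpr]-[0-9A-Za-z-]{10,}"),
    "OpenAI-style key": re.compile(r"sk-[0-9A-Za-z]{20,}"),
    "Generic secret assignment": re.compile(
        r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S{16,}"
    ),
}


def scan_file(path: Path) -> list[tuple[str, int]]:
    """Return (pattern name, line number) pairs for suspected secrets."""
    findings = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return findings  # unreadable or missing file: nothing to report
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in TOKEN_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings
```

Any hit warrants rotating the credential and moving it into a secrets manager or environment variable; pattern scanning confirms exposure but cannot prove its absence.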