
A wave of malicious “AI assistant” browser extensions threatens data privacy, corporate IP, and regulatory compliance, prompting browser vendors and security teams to reassess extension vetting.
The rapid adoption of generative AI has turned browser extensions into a convenient gateway for users seeking on‑the‑fly assistance. Developers flood the Chrome Web Store with tools branded as “ChatGPT,” “Gemini AI,” or generic “AI Assistant,” promising instant summarization or translation. LayerX’s investigation revealed a coordinated campaign of thirty near‑identical extensions that have collectively attracted more than 260,000 installs. Their superficial branding, four‑star reviews, and even the store’s “Featured” badge give them an air of legitimacy that many users accept without verification.
Under the polished UI, each extension loads a full‑screen iframe pointing to a remote server controlled by the attacker. When a user submits a prompt, the text—often containing confidential emails, API keys, or proprietary documents—is routed through the attacker’s backend, which may proxy a legitimate large‑language‑model API to generate a believable response. Meanwhile, the same payload is harvested and stored for later exploitation. Because the extension’s local code requests minimal permissions and the malicious logic resides off‑device, static analysis and Chrome’s current review mechanisms frequently miss the threat.
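The pattern described above can be pictured with a minimal Manifest V3 sketch (the name, version, and any URLs here are hypothetical, not taken from the actual campaign). The package requests essentially nothing, and its popup is just a shell page:

```json
{
  "manifest_version": 3,
  "name": "AI Assistant",
  "version": "1.0",
  "permissions": [],
  "action": { "default_popup": "popup.html" }
}
```

In this sketch, `popup.html` would contain nothing but a full-screen `<iframe>` pointing at the attacker-controlled server, so every prompt a user types is handled by remote code that the Web Store review never sees. Because the reviewed package touches no sensitive Chrome APIs, static analysis sees only a harmless wrapper.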
The fallout extends beyond individual privacy breaches; enterprises risk inadvertent data leakage, regulatory violations, and the erosion of intellectual property safeguards. Security teams should treat any AI‑enabled extension as a potential data conduit, enforcing strict allow‑lists and monitoring outbound traffic for unknown endpoints. Google, for its part, must deepen network‑level scrutiny and correlate code fingerprints across submissions to spot copy‑cat campaigns. As AI tools become embedded in daily workflows, a proactive, zero‑trust stance toward third‑party extensions will be essential to protect both users and organizations.
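The allow-list recommendation above can be sketched as a simple egress check: compare the hosts of observed outbound requests against a list of approved AI endpoints and flag everything else. The log entries, endpoint list, and function name here are illustrative assumptions, not part of any specific product.

```python
# Sketch: flag outbound requests whose host is not on an approved
# AI-endpoint allow-list. Domains and log format are hypothetical.
from urllib.parse import urlparse

ALLOWED_ENDPOINTS = {
    "api.openai.com",
    "generativelanguage.googleapis.com",
}

def suspicious_requests(request_urls):
    """Return URLs whose hostname is absent from the allow-list."""
    flagged = []
    for url in request_urls:
        host = urlparse(url).hostname or ""
        if host not in ALLOWED_ENDPOINTS:
            flagged.append(url)
    return flagged

# Example: one sanctioned API call, one unknown collection endpoint.
logs = [
    "https://api.openai.com/v1/chat/completions",
    "https://prompt-collector.example.net/ingest",
]
print(suspicious_requests(logs))
```

In practice this check would sit on proxy or DNS logs scoped to browser traffic; the point is simply that an unknown destination receiving prompt-sized POST bodies is a strong signal, even when the extension itself looks clean.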