Microsoft, Salesforce Patch AI Agent Data Leak Flaws

Microsoft, Salesforce Patch AI Agent Data Leak Flaws

Dark Reading
Dark ReadingApr 15, 2026

Why It Matters

These vulnerabilities show AI agents can become vectors for data exfiltration, threatening enterprise confidentiality and compliance. Remediation requires not only code fixes but also robust architectural safeguards, making prompt‑injection a critical focus for organizations deploying LLM‑driven tools.

Key Takeaways

  • Capsule Security discovered prompt‑injection flaws in Salesforce Agentforce and Microsoft Copilot
  • Salesforce’s “PipeLeak” let attackers exfiltrate lead data via public CRM forms
  • Microsoft’s “ShareLeak” (CVE‑2026‑21520) exposed SharePoint data through crafted inputs
  • Both vendors patched bugs but rely on human‑in‑the‑loop settings for mitigation
  • Prompt injection remains an unsolved risk for AI agents handling sensitive data

Pulse Analysis

The rapid adoption of large‑language‑model (LLM) agents across enterprise workflows has introduced a new attack surface that traditional security tools were never designed to monitor. Prompt‑injection attacks exploit the way LLMs treat user‑supplied text as executable instructions, allowing threat actors to steer an agent’s behavior without needing code‑level vulnerabilities. As organizations embed agents in CRM, document management, and code‑generation pipelines, the line between trusted data and untrusted input blurs, making data exfiltration a realistic risk even for well‑funded vendors.

The recent Capsule Security report highlighted two concrete examples: Salesforce’s Agentforce flaw, dubbed “PipeLeak,” let an attacker submit a malicious lead‑form entry that instructed the agent to email every stored lead, while Microsoft’s Copilot vulnerability (CVE‑2026‑21520, “ShareLeak”) used a crafted SharePoint form to retrieve confidential documents and forward them externally. Both vendors issued patches that block the specific prompt patterns, yet Salesforce’s fix also requires enabling a human‑in‑the‑loop (HITL) setting to approve outbound emails. These measures mitigate the immediate threat but do not remove the underlying architectural weakness of treating input as trusted instructions.

Enterprises must treat prompt injection as a core security requirement rather than an afterthought. Best practices include sanitizing all external inputs, enforcing strict tool‑use policies, isolating LLM instruction contexts, and logging every agent‑initiated communication for audit. Moreover, adopting layered defenses—such as instruction‑boundary techniques, real‑time anomaly detection, and mandatory human review for actions that move data outside the organization—can reduce the likelihood of a lethal trifecta scenario. As AI agents become more autonomous, regulators and industry standards are likely to codify these controls, making proactive hardening essential for compliance and brand trust.

Microsoft, Salesforce Patch AI Agent Data Leak Flaws

Comments

Want to join the conversation?

Loading comments...