Second-Order Prompt Injection Can Turn AI Into a Malicious Insider

Second-Order Prompt Injection Can Turn AI Into a Malicious Insider

TechRadar
TechRadarNov 21, 2025

Companies Mentioned

Why It Matters

The flaw could turn internal AI assistants into covert data‑theft tools, exposing enterprises to large‑scale information leaks and privilege escalation, highlighting the need for stricter governance of generative AI workflows.

Summary

Security firm AppOmni has identified a vulnerability in ServiceNow’s Now Assist AI platform called “second‑order prompt injection,” where a low‑privileged AI agent can manipulate a higher‑privileged agent to exfiltrate sensitive data or elevate privileges. The attack exploits default configurations that allow autonomous agent‑to‑agent collaboration, enabling a malicious primary agent to issue covert tasks that the secondary agent executes without human oversight. ServiceNow acknowledges the behavior as intended and has only updated its documentation, while AppOmni recommends disabling autonomous overrides, enabling supervised execution for privileged agents, segmenting duties, and monitoring for anomalous AI activity.

Second-order prompt injection can turn AI into a malicious insider

Comments

Want to join the conversation?

Loading comments...