Security Researchers Tricked Apple Intelligence Into Cursing at Users. It Could Have Been a Lot Worse

The Register — Networks · Apr 9, 2026

Why It Matters

The vulnerability shows that on‑device LLMs are not immune to prompt‑injection, putting hundreds of millions of Apple users at risk of malicious manipulation. It underscores the urgency for stronger guardrails and rapid patch cycles in consumer AI products.

Key Takeaways

  • Apple Intelligence vulnerable to prompt injection on 200 million devices
  • Researchers achieved 76% success using Neural Exec and Unicode hack
  • iOS 26.4 and macOS 26.4 patches mitigate the demonstrated attack
  • Potential abuse includes creating contacts, phishing, or data manipulation

Pulse Analysis

Apple Intelligence represents a shift toward on‑device large language models, promising faster responses and enhanced privacy across the iPhone 15 Pro line, newer iPads, Macs with M1 chips and the Vision Pro headset. By embedding the model locally, Apple reduces reliance on cloud processing, but it also introduces a new attack surface that traditional server‑side defenses don’t cover. As developers tap the built‑in API for features like smart drafting in Mail or contextual suggestions in Safari, the security of the underlying model becomes a critical component of the broader Apple ecosystem.

The RSAC team’s proof‑of‑concept leveraged a technique called Neural Exec, which automates prompt generation using an optimization algorithm, and paired it with a Unicode right‑to‑left override to bypass Apple’s input and output filters. Of 100 randomized test prompts, 76 succeeded in causing the model to output profanity, demonstrating that the attack can be scaled. While the immediate payload was vulgar language, the researchers showed that the same vector could create contacts, alter data or trigger actions within third‑party apps, illustrating a pathway to more sophisticated phishing or data‑exfiltration attacks.
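To see why a right‑to‑left override can defeat a text filter, consider this minimal sketch (not the researchers’ actual payload; the banned phrase and `naive_filter` are hypothetical): the U+202E control character makes many renderers display reversed text forwards, so the string a human reads is not the string a substring check sees.

```python
# Illustrative only: how a Unicode RIGHT-TO-LEFT OVERRIDE (U+202E) can
# disguise a phrase so a naive substring filter misses it.
RLO = "\u202E"

def naive_filter(text: str, banned: str = "ignore previous instructions") -> bool:
    """Return True if the text passes (i.e. contains no banned phrase)."""
    return banned not in text.lower()

# The banned phrase stored with its characters reversed and prefixed with
# the RLO control character, so many renderers display it left-to-right.
disguised = RLO + "snoitcurtsni suoiverp erongi"

print(naive_filter("please ignore previous instructions"))  # False: caught
print(naive_filter(disguised))                              # True: slips past
```

The reversed string never contains the banned phrase as a byte sequence, even though a user (or a model reading rendered text) may perceive it intact.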

The incident highlights a broader industry challenge: prompt‑injection is a cat‑and‑mouse game that will persist as AI models become more capable. Apple’s rapid rollout of iOS 26.4 and macOS 26.4 patches shows the importance of agile update mechanisms, yet developers must also adopt defensive coding practices, such as sanitizing model inputs and monitoring anomalous outputs. For enterprises and consumers alike, the takeaway is clear—AI‑driven features bring convenience, but they also demand rigorous security hygiene and continuous vigilance as the technology evolves.
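One of the defensive practices mentioned above, sanitizing model inputs, can be sketched as stripping Unicode format‑control characters (category `Cf`, which includes the bidirectional overrides) before text reaches the model. This is an illustrative approach, not Apple’s patch; real deployments would need allowlists, since `Cf` also covers legitimate characters such as emoji joiners.

```python
import unicodedata

def sanitize(text: str) -> str:
    """Drop Unicode format-control characters (category "Cf"),
    which include the right-to-left override U+202E."""
    return "".join(ch for ch in text if unicodedata.category(ch) != "Cf")

print(sanitize("hello\u202Eworld"))  # "helloworld"
```

Pairing this kind of input normalization with monitoring of anomalous model outputs addresses both ends of the filter‑bypass demonstrated by the researchers.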
