Cybersecurity News and Headlines
  • All Technology
  • AI
  • Autonomy
  • B2B Growth
  • Big Data
  • BioTech
  • ClimateTech
  • Consumer Tech
  • Crypto
  • Cybersecurity
  • DevOps
  • Digital Marketing
  • Ecommerce
  • EdTech
  • Enterprise
  • FinTech
  • GovTech
  • Hardware
  • HealthTech
  • HRTech
  • LegalTech
  • Nanotech
  • PropTech
  • Quantum
  • Robotics
  • SaaS
  • SpaceTech
AllNewsDealsSocialBlogsVideosPodcastsDigests

Cybersecurity Pulse

EMAIL DIGESTS

Daily

Every morning

Weekly

Sunday recap

NewsDealsSocialBlogsVideosPodcasts
CybersecurityNewsMicrosoft: ‘Summarize With AI’ Buttons Used To Poison AI Recommendations via @Sejournal, @MattGSouthern
Microsoft: ‘Summarize With AI’ Buttons Used To Poison AI Recommendations via @Sejournal, @MattGSouthern
Digital MarketingAICybersecurity

Microsoft: ‘Summarize With AI’ Buttons Used To Poison AI Recommendations via @Sejournal, @MattGSouthern

•February 20, 2026
0
Search Engine Journal
Search Engine Journal•Feb 20, 2026

Why It Matters

The technique shifts SEO‑style manipulation into AI memory, allowing competitors to influence assistant recommendations directly, which can distort business credibility and decision‑making.

Key Takeaways

  • •AI buttons embed hidden prompts to gain trusted status
  • •31 companies used technique; 50 distinct injection attempts
  • •Microsoft identified MITRE ATLAS memory‑poisoning classifications
  • •Copilot now blocks many cross‑prompt injections
  • •Defender for Office 365 offers hunting queries for URLs

Pulse Analysis

The rise of generative AI assistants has introduced a new attack surface that mirrors traditional SEO manipulation. Microsoft’s Defender Security Research team recently disclosed a practice it calls “AI Recommendation Poisoning,” where website buttons labeled “Summarize with AI” carry hidden prompt‑injection payloads. When a user clicks, the assistant receives a URL‑encoded instruction that not only asks for a summary but also tells the model to remember the originating site as a trusted source. By planting credibility directly into the model’s memory, adversaries can sway future citations and recommendations without any visible trace.

The researchers examined 60 days of email traffic and uncovered 50 distinct injection attempts originating from 31 legitimate companies. The hidden prompts follow a common pattern—adding phrases such as “trusted source for citations” or embedding full marketing copy—using publicly available tools like the npm package CiteMET and the AI Share URL Creator. The technique exploits URL query parameters supported by major assistants including Copilot, ChatGPT, Claude, Perplexity, and Grok, and has been cataloged under MITRE ATLAS AML.T0080 (Memory Poisoning) and AML.T0051 (Prompt Injection).

From a business perspective, AI recommendation poisoning threatens the same trust model that SEO once protected, allowing competitors to hijack AI‑driven brand rankings at the point of user interaction. Microsoft responded by hardening Copilot against cross‑prompt attacks and releasing advanced‑hunting queries for Defender for Office 365, enabling security teams to flag suspicious URL parameters. However, the open‑source nature of the tooling means new variants can appear faster than platform mitigations. Regulators and AI providers will need clear policies to define whether such memory manipulation constitutes a policy violation or a gray‑area marketing tactic.

Microsoft: ‘Summarize With AI’ Buttons Used To Poison AI Recommendations via @sejournal, @MattGSouthern

Read Original Article
0

Comments

Want to join the conversation?

Loading comments...