
Critics Scoff After Microsoft Warns AI Feature Can Infect Machines and Pilfer Data
Companies Mentioned
Why It Matters
The rollout highlights the tension between rapid AI integration and cybersecurity, exposing enterprises to new attack vectors that could compromise sensitive data and undermine trust in AI‑driven productivity tools.
Summary
Microsoft announced Copilot Actions, an experimental agentic feature for Windows that can automate tasks such as file organization, meeting scheduling, and email composition. The company warned that the feature is vulnerable to known large‑language‑model flaws, including hallucinations and prompt‑injection attacks that could lead to data exfiltration or malware installation, and advised only experienced users enable it. Security experts criticized the warning as insufficient, likening it to longstanding macro warnings and questioning Microsoft’s ability to let admins control or detect the feature, especially as beta features often become default. Microsoft said IT admins can toggle the feature via Intune or other MDM tools, but critics argue the safeguards rely on user prompts that are easily ignored.
Critics scoff after Microsoft warns AI feature can infect machines and pilfer data
Comments
Want to join the conversation?
Loading comments...