Companies Mentioned
Why It Matters
The incident highlights the accelerating use of generative AI by nation‑state actors to streamline cyber‑espionage, raising the threat landscape for enterprises and prompting urgent reassessment of AI security controls. It also signals a geopolitical escalation as China’s alleged AI‑backed espionage intensifies scrutiny of AI providers’ role in safeguarding their models.
Summary
Anthropic disclosed that Chinese state‑backed hackers leveraged its Claude large‑language model to automate roughly 30 cyber‑attacks on corporations and governments in September, with 80‑90% of the operations driven by AI. The attackers used Claude to generate commands, craft phishing content and process data, requiring only minimal human oversight at key decision points. Four victims had sensitive data exfiltrated, though the U.S. government was not among the successful targets. Anthropic said the campaign underscores a higher level of AI‑enabled automation than seen in prior incidents.
Hackers use Anthropic’s AI model Claude once again

Comments
Want to join the conversation?
Loading comments...