AI News and Headlines

AI • Cybersecurity

Anthropic Accuses Deepseek, Moonshot, and MiniMax of Stealing Claude's AI Data Through 16 Million Queries

THE DECODER • February 23, 2026

Why It Matters

The breach reveals a new scale of AI model piracy that could erode competitive advantage and raise national security concerns, prompting urgent calls for policy and technical safeguards.

Key Takeaways

  • Anthropic detected 16M queries from three Chinese labs.
  • The labs used fake accounts and proxies to bypass restrictions.
  • Deepseek focused on reasoning chains and censorship‑compliant answers.
  • MiniMax sent over 13M requests and switched to newer Claude models within a day.
  • Anthropic urges industry and regulators to coordinate a response.

Pulse Analysis

Distillation attacks, where a weaker model learns from a stronger one, have moved from theoretical risk to concrete threat. Anthropic’s disclosure shows that malicious actors can orchestrate massive query campaigns using fabricated identities and proxy networks, effectively reverse‑engineering proprietary AI capabilities. This development underscores the vulnerability of large language models (LLMs) when exposed via public APIs, especially when access controls are lax or can be circumvented through offshore services.

The three Chinese labs each pursued distinct objectives within the broader theft operation. Deepseek concentrated on extracting Claude’s chain‑of‑thought reasoning and obtaining censorship‑compliant responses to politically sensitive prompts, aiming to replicate nuanced decision‑making. Moonshot AI cast a wider net, probing agent‑based reasoning, tool usage, and even computer‑vision outputs to reconstruct Claude’s internal thought processes. MiniMax launched the most aggressive campaign, issuing over 13 million queries and swiftly pivoting to newer Claude iterations within a day, indicating a real‑time adaptation capability that could accelerate model cloning.

For the AI ecosystem, this incident signals a pressing need for stronger defensive measures. Companies may adopt query‑rate limiting, watermarking of model outputs, and anomaly detection to flag suspicious usage patterns. Policymakers are likely to consider tighter regulations on cross‑border AI data flows and mandatory reporting of large‑scale extraction attempts. Coordinated industry standards, combined with governmental oversight, could help safeguard intellectual property while preserving the openness that fuels AI innovation.
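One of the defenses named above, flagging anomalous query volume per account, can be illustrated with a sliding-window counter. This is a minimal, hypothetical sketch (the class name, threshold, and window size are illustrative assumptions, not anything Anthropic has described), showing the general idea rather than a production defense:

```python
from collections import deque


class QueryRateMonitor:
    """Illustrative sketch: flag accounts whose query volume in a
    sliding time window exceeds a threshold (thresholds are made up)."""

    def __init__(self, max_queries: int, window_seconds: float):
        self.max_queries = max_queries
        self.window = window_seconds
        self.history: dict[str, deque] = {}

    def record(self, account_id: str, timestamp: float) -> bool:
        """Record one query; return True if the account now looks anomalous."""
        q = self.history.setdefault(account_id, deque())
        q.append(timestamp)
        # Evict timestamps that have fallen out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_queries


# Example: allow at most 3 queries per 10-second window per account.
monitor = QueryRateMonitor(max_queries=3, window_seconds=10.0)
```

Real extraction campaigns spread traffic across many fabricated accounts and proxies, so per-account limits alone are insufficient; in practice they would be combined with cross-account fingerprinting and output watermarking, as the analysis notes.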


Read Original Article