AI News and Headlines
AI • Defense • Global Economy • Legal

Anthropic Says DeepSeek, Moonshot, and MiniMax Used 24,000 Fake Accounts to Rip Off Claude

VentureBeat • February 23, 2026

Why It Matters

The breach threatens intellectual‑property value and national‑security safeguards, pressuring policymakers to tighten AI export controls and industry standards.

Key Takeaways

  • 24,000 fake accounts generated 16 million Claude exchanges
  • DeepSeek, Moonshot, and MiniMax targeted reasoning, coding, and tool use
  • Distillation attacks bypassed Anthropic’s China access ban
  • Proxy networks employed a hydra‑cluster architecture for resilience
  • Anthropic urges industry‑wide defenses and policy coordination

Pulse Analysis

Anthropic’s latest disclosure shines a harsh light on the growing practice of AI distillation, where competitors harvest a model’s outputs to train smaller, cheaper replicas. By tracing more than 16 million API calls to roughly 24,000 fabricated accounts, the company identified three Chinese labs, DeepSeek, Moonshot AI, and MiniMax, as the primary actors. The attacks focused on Claude’s most differentiated capabilities, such as chain‑of‑thought reasoning, tool use, and code generation, allowing the labs to amass training data at a fraction of the original development cost.
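
The harvesting pattern described above can be illustrated with a toy sketch: prompts are rotated across many fake accounts to evade per‑account limits, the teacher model’s responses are recorded, and a "student" is trained to imitate them. Everything here (the `teacher` stand‑in, the account IDs, the memorizing student) is hypothetical and only illustrates the data‑collection pattern, not any lab’s actual pipeline.

```python
import itertools


def teacher(prompt: str) -> str:
    # Stand-in for a frontier-model API; purely illustrative.
    return f"answer:{hash(prompt) % 1000}"


def harvest(prompts, n_accounts=5):
    """Rotate prompts across fake account IDs, recording (prompt, response) pairs."""
    accounts = itertools.cycle(f"acct-{i}" for i in range(n_accounts))
    corpus = []
    for prompt in prompts:
        acct = next(accounts)  # spread the load so no single account looks heavy
        corpus.append((prompt, teacher(prompt)))
    return corpus


class Student:
    """Trivial 'student' that memorizes teacher outputs (a stand-in for fine-tuning)."""

    def __init__(self, corpus):
        self.table = dict(corpus)

    def answer(self, prompt):
        return self.table.get(prompt, "unknown")
```

In a real distillation attack the student would be a neural model fine‑tuned on the harvested pairs; the economics are the same either way, since the expensive step (producing high‑quality responses) is outsourced to the teacher.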

The episode arrives amid a heated U.S. debate over AI chip export controls, a policy arena where Anthropic’s CEO Dario Amodei has been a vocal advocate. By framing distillation as a national‑security threat, the company argues that illicitly copied models lack the safety guardrails embedded in American systems, potentially feeding authoritarian surveillance or cyber‑offensive tools. This narrative links technical theft directly to the broader geopolitical contest for AI supremacy, suggesting that even strict hardware bans can be circumvented when intellectual‑property extraction is weaponized at scale.

Anthropic’s response combines immediate technical safeguards with a call for collective action. New classifiers and behavioral fingerprinting aim to flag chain‑of‑thought prompting, while tighter verification for educational and startup accounts reduces entry points for fraudulent users. The firm is also sharing indicators with cloud providers and rival labs, recognizing that API security has become as strategic as model weights themselves. If policymakers translate these findings into tighter export regimes or sanctions on proxy services, the industry may see a shift toward more robust authentication and monitoring standards across all frontier AI offerings.
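
Anthropic has not published its classifiers, but one ingredient of behavioral fingerprinting can be sketched as a simple volume‑anomaly check: flag accounts whose request counts sit far above the population mean. The log format and threshold below are assumptions for illustration, not a production detector.

```python
from collections import Counter
from statistics import mean, pstdev


def flag_accounts(logs, z_threshold=2.0):
    """Flag accounts whose request volume is an outlier versus the population.

    `logs` is a list of (account_id, prompt) pairs. This toy heuristic
    computes a z-score per account; real systems would also fingerprint
    prompt structure, timing, and proxy origin.
    """
    counts = Counter(acct for acct, _ in logs)
    values = list(counts.values())
    mu, sigma = mean(values), pstdev(values)
    if sigma == 0:
        return []  # uniform traffic: nothing stands out
    return [acct for acct, c in counts.items() if (c - mu) / sigma > z_threshold]
```

A distributed attacker defeats exactly this check by spreading traffic across thousands of accounts, which is why the disclosure emphasizes cross‑account signals and indicator sharing with cloud providers rather than per‑account limits alone.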
