Crypto

Anthropic Research Shows AI Agents Are Closing In on Real DeFi Attack Capability

CoinDesk • December 2, 2025

Companies Mentioned

Anthropic

Why It Matters

AI‑driven autonomous exploitation threatens billions locked in DeFi and could accelerate attacks across the broader software supply chain, forcing immediate adoption of AI‑assisted defenses.

Key Takeaways

  • GPT‑5 and Sonnet 4.5 generated $4.6M in simulated exploits
  • Models identified two zero‑day BNB Chain flaws worth $3,694
  • Running the AI agents cost about $1.22 per contract scan
  • Automated exploits could shrink DeFi attack windows
  • The same AI reasoning may transfer to broader software ecosystems

Pulse Analysis

The convergence of large language models and automated reasoning has reshaped cybersecurity research, turning tools once limited to code generation into potent offensive instruments. Recent advances allow AI agents to parse contract bytecode, identify logical errors, and synthesize transaction sequences without human input. This shift mirrors earlier trends in malware automation, but the public and immutable nature of blockchain assets amplifies the potential damage, prompting regulators and industry groups to reassess their risk models.

Anthropic's study provides the first quantitative benchmark of AI‑enabled DeFi attacks. By running GPT‑5, Claude Opus 4.5 and Sonnet 4.5 across 405 historically exploited contracts, the team recorded $4.6 million in simulated losses, while a targeted scan of 2,849 fresh BNB Chain contracts uncovered two zero‑day vulnerabilities worth $3,694. The cost structure—approximately $1.22 per contract evaluation—demonstrates that even modestly funded adversaries could deploy continuous scanning bots, compressing the window between deployment and exploitation to minutes. For protocol designers, this underscores the urgency of integrating AI‑driven static analysis into the development pipeline.
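The economics described above can be sketched with the article's own figures. The numbers below (cost per scan, contracts scanned, value of the flaws found) are taken from the study as reported; this is a back-of-the-envelope illustration of why continuous scanning is affordable, not part of Anthropic's methodology.

```python
# Back-of-the-envelope scan-cost model using the figures reported above.
COST_PER_SCAN = 1.22       # USD per contract evaluation (reported)
CONTRACTS_SCANNED = 2_849  # fresh BNB Chain contracts in the targeted scan
VALUE_FOUND = 3_694        # USD across the two zero-day flaws uncovered

total_cost = COST_PER_SCAN * CONTRACTS_SCANNED
net = VALUE_FOUND - total_cost

print(f"Total scan cost: ${total_cost:,.2f}")  # ≈ $3,475.78
print(f"Value found:     ${VALUE_FOUND:,}")
print(f"Net:             ${net:,.2f}")         # ≈ $218.22
```

Even at this small scale the scan roughly pays for itself; an adversary running it continuously against every new deployment would face only marginal per-contract cost, which is what compresses the attack window the article warns about.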

Beyond decentralized finance, the underlying reasoning patterns—state manipulation, privilege escalation, and transaction ordering—are transferable to traditional software stacks, cloud services, and supply‑chain components. As model inference becomes cheaper and open‑source toolchains mature, attackers are likely to broaden their target set, eroding the perceived safety of closed‑source environments. Defenders must therefore adopt a layered approach: employing AI‑assisted auditors, real‑time monitoring of on‑chain activity, and incentive mechanisms for rapid vulnerability disclosure. Proactive investment in these capabilities will be essential to stay ahead of an emerging class of autonomous exploit agents.

