Is Anthropic 'Nerfing' Claude? Users Increasingly Report Performance Degradation as Leaders Push Back

VentureBeat
Apr 13, 2026

Why It Matters

If power users perceive Claude as weaker, Anthropic risks losing enterprise customers and market share to rivals like OpenAI, underscoring the importance of transparency in model performance and product changes.

Key Takeaways

  • Developers report Claude Code slower, more token‑heavy, less reliable
  • AMD senior director cites analysis of 6,800 sessions showing regression
  • Anthropic attributes changes to adaptive thinking defaults and medium effort level
  • BridgeBench benchmark drop from 83% to 68% sparked nerf accusations
  • Company denies secret throttling, cites cache TTL tweaks and session caps

Pulse Analysis

In early April 2026, a surge of developer complaints surfaced across GitHub, X and Reddit accusing Anthropic of silently weakening Claude Opus 4.6 and Claude Code. The most detailed allegation came from Stella Laurenzo, senior director of AMD’s AI group, who mined over 6,800 Claude Code sessions and flagged a sharp drop in reasoning depth, more premature stops and higher token burn. The narrative quickly adopted the term “AI shrinkflation,” suggesting users were paying the same price for a less capable product, and it was amplified by viral posts claiming a 67% performance decline.

Anthropic responded by pointing to product‑level adjustments rather than a hidden model downgrade. A UI change in February concealed the “thinking” pane but left the underlying inference unchanged, while the February 9 shift to adaptive thinking and the March 3 default to effort level 85 were intended to balance latency, cost and intelligence. In early March the company also shortened the prompt‑cache TTL from one hour to five minutes and tightened five‑hour session limits during peak Pacific‑time hours, measures that some users interpret as increased quota pressure. Anthropic maintains these tweaks are disclosed, experimental optimizations, not evidence of intentional nerfing.
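The cost impact of the TTL change is straightforward to model: with a shorter cache lifetime, any gap between requests longer than the TTL forces the full prompt to be reprocessed at uncached rates. The toy simulation below illustrates this under the simplifying assumption of a refresh-on-use cache; the function and numbers are illustrative, not Anthropic's actual implementation.

```python
def cache_misses(request_times_min, ttl_min):
    """Count cold (cache-miss) requests, assuming the cache entry's
    TTL is refreshed each time it is used. Illustrative model only."""
    misses = 0
    last_use = None
    for t in request_times_min:
        if last_use is None or t - last_use > ttl_min:
            misses += 1  # cache expired: full prompt reprocessed at uncached cost
        last_use = t
    return misses

times = list(range(0, 61, 10))  # one request every 10 minutes for an hour
print(cache_misses(times, ttl_min=60))  # long TTL: only the first request is cold
print(cache_misses(times, ttl_min=5))   # short TTL: every request is cold
```

With a one-hour TTL, a developer making a request every ten minutes pays the full prompt cost once; with a five-minute TTL, every request falls outside the cache window, which is consistent with users perceiving higher token burn even though the model weights are unchanged.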

The dispute highlights a growing trust gap that could affect Anthropic’s enterprise foothold as OpenAI rolls out a mid‑tier Codex‑focused subscription and expands its business‑centric offerings. When product settings alter token consumption or latency, power users may perceive a decline in model quality, eroding confidence even if core weights remain stable. Transparency around benchmark methodology, effort defaults and cache policies will be crucial for retaining high‑value developers and preventing churn, especially as the AI market tightens and customers demand measurable performance guarantees.
