Claude Is Getting Worse, According to Claude

The Register — Networks
Apr 13, 2026

Why It Matters

The outage and mounting quality concerns threaten Claude’s reputation as a reliable developer‑focused AI, potentially driving customers toward competing platforms and prompting investors to reassess Anthropic’s operational resilience.

Key Takeaways

  • Claude outage lasted 48 minutes, affecting Claude.ai and Claude Code
  • Users report rising quality complaints, 20+ issues in April so far
  • Anthropic throttles usage during peak hours to manage capacity
  • Automated issue closing may hide unresolved problems for developers
  • Independent benchmarks show Opus 4.6 scores unchanged on SWE‑Bench‑Pro

Pulse Analysis

Anthropic’s Claude has long been a go‑to large language model for developers, positioning itself as a safer alternative to rivals such as OpenAI’s GPT‑4. On April 13, 2026, the service suffered a 48‑minute outage from 15:31 to 16:19 UTC, triggering elevated error rates across both Claude.ai and Claude Code. While brief, the interruption amplified existing frustrations among paying customers who rely on Claude for code generation, debugging, and documentation. The incident also reminded enterprises that even premium AI platforms can experience downtime that disrupts production pipelines.

Beyond the outage, a growing chorus of developers is flagging deteriorating response quality. GitHub issue data shows more than 20 new Claude Code complaints in the first 13 days of April, outpacing March’s 18‑issue total and marking a three‑fold increase from the January‑February baseline. Anthropic has responded by throttling usage during peak periods, a move that some users say further degrades performance for paid accounts. Compounding the problem, the company’s automation script closes inactive tickets, potentially obscuring unresolved bugs. Yet independent testing from Margin Lab indicates Opus 4.6 still holds steady on the SWE‑Bench‑Pro benchmark, suggesting the perceived decline may be situational rather than systemic.
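
Anthropic has not published the details of that closing automation, but the pattern is a familiar one: a scheduled job scans open issues and closes anything with no recent activity. The Python sketch below, written against the GitHub REST API, shows how such a stale‑issue bot typically works and why inactivity as the sole criterion can bury genuine, unresolved bugs. The repository name, token variable, and 30‑day threshold are illustrative assumptions, not details taken from the article.

import os
from datetime import datetime, timedelta, timezone

import requests

# Hypothetical repository and threshold, for illustration only.
REPO = "example-org/example-cli"
STALE_AFTER = timedelta(days=30)
API = "https://api.github.com"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def close_stale_issues() -> None:
    """Close open issues that have seen no activity for STALE_AFTER."""
    cutoff = datetime.now(timezone.utc) - STALE_AFTER
    # Ask for the least recently updated issues first, so stale ones arrive on page one.
    resp = requests.get(
        f"{API}/repos/{REPO}/issues",
        headers=HEADERS,
        params={"state": "open", "sort": "updated", "direction": "asc", "per_page": 100},
        timeout=30,
    )
    resp.raise_for_status()
    for issue in resp.json():
        if "pull_request" in issue:
            # The issues endpoint also returns pull requests; skip them.
            continue
        updated = datetime.fromisoformat(issue["updated_at"].replace("Z", "+00:00"))
        if updated < cutoff:
            # Inactivity is the only test: a bug nobody replied to is closed
            # exactly like one that was actually fixed.
            requests.post(
                f"{API}/repos/{REPO}/issues/{issue['number']}/comments",
                headers=HEADERS,
                json={"body": "Closing due to inactivity."},
                timeout=30,
            )
            requests.patch(
                f"{API}/repos/{REPO}/issues/{issue['number']}",
                headers=HEADERS,
                json={"state": "closed"},
                timeout=30,
            )

if __name__ == "__main__":
    close_stale_issues()

Run on a schedule, a job like this keeps the open‑issue count tidy, but it cannot distinguish a resolved report from one that was simply ignored, which is exactly the visibility gap developers are complaining about.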

The mixed signals around Claude underscore a broader challenge for AI‑powered developer tools: balancing rapid model iteration with reliability and transparency. As enterprises embed models deeper into critical codebases, any dip in accuracy or unexpected throttling can translate into costly rework or even production failures. Competitors will likely seize the moment to highlight their own uptime records and benchmark consistency, pressuring Anthropic to improve monitoring, communication, and open issue handling. For investors and customers, the episode serves as a reminder to diversify AI vendor strategies and demand clear service‑level commitments.
