
Why Was Claude Mythos Really Delayed? OpenAI Points to Anthropic's Massive Compute Constraints, Says Report
Why It Matters
Compute capacity is emerging as a strategic moat that can accelerate model releases and capture market share, reshaping the competitive landscape for AI startups and their investors.
Key Takeaways
- OpenAI reports 1.9 GW of compute in 2025, targeting 30 GW by 2030
- Anthropic lagged with 1.4 GW in 2025, aiming for 7‑8 GW next year
- Claude Mythos delayed due to Anthropic’s compute constraints
- OpenAI frames compute as a product‑level constraint
- Infrastructure scale now a decisive competitive advantage
Pulse Analysis
The race for raw compute power has become the defining battleground for generative‑AI firms. While early‑stage startups once relied on cloud credits, the most advanced models now demand dedicated data‑center‑scale hardware. OpenAI’s recent investor memo underscores this shift, positioning its rapid expansion to 1.9 GW in 2025 as a strategic moat that keeps it ahead of rivals and ensures uninterrupted service for ChatGPT and emerging products. Analysts see this trend echoing broader industry moves, where firms invest billions in custom silicon and high‑density clusters to meet exploding demand.
OpenAI’s roadmap to roughly 30 GW by 2030 signals a commitment to massive scale, potentially lowering per‑inference costs and enabling more ambitious multimodal offerings. Such growth, however, carries hefty capital expenditures and energy costs, prompting investors to scrutinize return‑on‑investment metrics. By contrast, Anthropic’s reported 1.4 GW of capacity in 2025 places it at a clear disadvantage, limiting its ability to roll out flagship models like Claude Mythos broadly. The compute gap translates into slower iteration cycles, higher latency for users, and a narrower partner ecosystem, all of which can erode market momentum.
Anthropic’s decision to restrict Mythos to select partners reflects both a pragmatic response to its hardware ceiling and a cautious stance on safety, given the model’s elevated cybersecurity risk. Yet the memo suggests the delay is less about prudence and more about resource scarcity, a narrative that could influence future funding rounds and partnership negotiations. As compute becomes a product constraint, firms that secure early, large‑scale infrastructure will likely dictate pricing, set performance benchmarks, and capture the lion’s share of AI‑driven revenue streams. Investors and industry watchers should therefore monitor capacity expansions as closely as model breakthroughs themselves.