The Epoch Brief - March 2026

Epoch AI · Apr 2, 2026

Key Takeaways

  • AI chip memory bandwidth grew 4.1× annually, now ~70 million TB/s total
  • CoWoS packaging and HBM, not logic dies, bottlenecked 2025 chip supply
  • Final training runs consume under 30% of R&D compute at leading labs
  • Job ads reveal a rapid rise in AI go‑to‑market hiring
  • GPT‑5.4 set a new FrontierMath record, scoring 50% on Tiers 1–3

Summary

Epoch AI’s March 2026 brief highlights three new Data Insights, including a 4.1× annual rise in AI chip memory bandwidth now at 70 million TB/s, and reveals that advanced packaging and high‑bandwidth memory, not logic dies, constrained chip production in 2025. A Gradient Update shows final training runs consume less than 30% of R&D compute at leading labs, while job‑posting analysis signals a surge in go‑to‑market roles. The brief also celebrates GPT‑5.4’s record performance on FrontierMath benchmarks and notes hiring for a contract data scientist and a special projects associate.

Pulse Analysis

The AI hardware landscape is entering a hyper‑growth phase, as evidenced by Epoch’s latest Data Insight showing cumulative AI chip memory bandwidth soaring to roughly 70 million terabytes per second—a figure 300,000 times greater than global internet traffic. This surge is driven primarily by high‑bandwidth memory (HBM) and advanced packaging technologies such as Chip‑on‑Wafer‑on‑Substrate (CoWoS). However, the same insight flags a supply‑chain choke point: the four biggest AI chip designers captured about 90% of global CoWoS and HBM capacity in 2025, while consuming only a modest share of advanced logic dies, highlighting a strategic vulnerability for manufacturers and cloud providers alike.
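The bandwidth figures above admit a quick back-of-envelope check. The sketch below uses only the numbers quoted in the brief (70 million TB/s cumulative bandwidth, a 300,000× multiple of internet traffic, and 4.1× annual growth); the implied internet-traffic baseline and the doubling time are derived quantities, not figures stated in the brief:

```python
import math

# Figures quoted in the brief
total_bandwidth_tb_s = 70e6   # cumulative AI chip memory bandwidth, TB/s
internet_multiple = 300_000   # stated ratio vs. global internet traffic
annual_growth = 4.1           # yearly growth factor in memory bandwidth

# Implied global internet traffic baseline (derived, not stated)
implied_internet_tb_s = total_bandwidth_tb_s / internet_multiple

# At 4.1x per year, how long does aggregate bandwidth take to double?
doubling_time_years = math.log(2) / math.log(annual_growth)

print(f"Implied internet traffic: ~{implied_internet_tb_s:.0f} TB/s")
print(f"Doubling time at 4.1x/year: ~{doubling_time_years:.2f} years")
```

On these assumptions, the stated multiple implies global internet traffic of roughly 230 TB/s, and a 4.1× annual growth rate means aggregate bandwidth doubles about every six months.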

Parallel research from Epoch’s Gradient Updates reveals that the final training phase of AI models accounts for less than 30% of total R&D compute spending at firms like MiniMax and Z.ai, echoing earlier OpenAI findings. This suggests that a substantial portion of compute resources is allocated to exploratory research, model iteration, and infrastructure development rather than just the last‑mile training. Moreover, an analysis of frontier AI job postings uncovers a pronounced shift toward go‑to‑market roles, indicating that companies are preparing to commercialize advanced models faster, which could accelerate market saturation and intensify competition for talent.

On the performance front, GPT‑5.4’s record on the FrontierMath benchmark—scoring 50% on Tier 1‑3 problems and achieving a historic solution on a Tier 4 challenge—demonstrates AI’s expanding ability to tackle complex, previously unsolved mathematical problems. Such breakthroughs not only validate the efficacy of large‑scale language models in scientific discovery but also raise questions about the future of human‑led research. As AI continues to close the gap on frontier mathematics, stakeholders across academia, industry, and policy must consider the implications for intellectual property, research funding, and the broader innovation ecosystem.
