Google Plans Nearly Two Million New AI Chips as It Turns to Marvell for Custom Designs

THE DECODER
Apr 20, 2026

Why It Matters

Custom chips could lower Google’s per‑unit costs and accelerate AI workloads, strengthening its competitive edge in cloud AI services.

Key Takeaways

  • Google targets ~2 million custom memory processing units (MPUs) for data‑center AI workloads.
  • New inference‑only TPU designed specifically for running trained AI models.
  • Partnership with Marvell reduces reliance on Broadcom’s high‑fee TPU supply.
  • Custom silicon aims to cut costs and improve AI performance at scale.

Pulse Analysis

The race for purpose‑built AI silicon has intensified as cloud providers seek to outpace rivals on performance and price. Google’s decision to enlist Marvell—a firm that previously delivered the first inference chip for Groq—signals a strategic shift toward tighter integration of memory and compute. By co‑creating a dedicated memory processing unit, Google aims to offload data‑intensive tasks from its TPUs, reducing latency and improving throughput for large language models and other memory‑hungry workloads.

From a technical standpoint, the new MPU will act as a companion to Google’s existing TPUs, dynamically allocating tasks based on compute versus memory demands. The separate inference‑only TPU is optimized for running trained models at scale, a capability increasingly critical as enterprises migrate AI services to the cloud. Compared with Broadcom’s current TPU supply, which carries premium per‑unit fees, Marvell’s custom design promises a more cost‑effective silicon stack, potentially narrowing the margin gap that has favored competitors like Nvidia, which recently licensed Groq’s LPU technology for $20 billion.
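The split described here is essentially a roofline‑style scheduling decision: operations with high arithmetic intensity stay on the compute die, while memory‑bound operations go to the die sitting closer to DRAM. As a purely illustrative sketch (the threshold, device names, and routing policy below are assumptions for clarity, not anything Google or Marvell has disclosed), a dispatcher could classify each operation by FLOPs per byte moved and route it accordingly:

```python
from dataclasses import dataclass

# Hypothetical cutoff for illustration only; the real TPU/MPU split,
# if it works this way at all, has not been published.
ARITHMETIC_INTENSITY_THRESHOLD = 10.0  # FLOPs per byte moved


@dataclass
class Op:
    name: str
    flops: float        # floating-point operations the op performs
    bytes_moved: float  # bytes read from / written to memory


def route(op: Op) -> str:
    """Route an op to the compute die ("TPU") or the memory die ("MPU")
    based on its arithmetic intensity (FLOPs per byte)."""
    intensity = op.flops / op.bytes_moved
    # Compute-bound ops (high intensity) stay on the TPU's matrix units;
    # memory-bound ops (low intensity) go to the MPU, avoiding the cost
    # of shuttling large tensors across to the compute die.
    return "TPU" if intensity >= ARITHMETIC_INTENSITY_THRESHOLD else "MPU"


if __name__ == "__main__":
    ops = [
        # A large matmul: ~1.4e11 FLOPs over ~1e8 bytes -> compute-bound.
        Op("matmul_4096", flops=2 * 4096**3, bytes_moved=3 * 4096**2 * 2),
        # KV-cache gathers and embedding lookups move far more bytes
        # than they compute over -> memory-bound.
        Op("kv_cache_gather", flops=1e6, bytes_moved=1e9),
        Op("embedding_lookup", flops=1e5, bytes_moved=5e8),
    ]
    for op in ops:
        print(f"{op.name:20s} -> {route(op)}")
```

Under this toy policy, large language model inference, dominated by memory‑bound KV‑cache and embedding traffic, is exactly the kind of workload that would land on the MPU, which is consistent with the latency and throughput gains the partnership is targeting.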

For the market, Google’s move could reshape the data‑center chip ecosystem. Reducing dependence on Broadcom not only diversifies supply risk but also pressures other vendors to offer more flexible pricing and performance options. If the MPU and inference TPU deliver the projected efficiency gains, Google could lower operating expenses for its AI‑driven services, pass savings to customers, and reinforce its position as a leading cloud AI platform. The partnership also underscores Marvell’s growing role as a custom‑silicon partner for hyperscale players, hinting at further collaborations in the coming years.
