Google Is in Talks with Marvell About Two New AI Chips—Putting Pressure on Broadcom

Igor’sLAB, Apr 22, 2026

Key Takeaways

  • Google is exploring two new AI chips with Marvell, expanding its design options
  • Marvell’s memory processing unit would complement Google’s existing TPU ecosystem
  • Parallel talks give Google leverage over Broadcom on pricing and supply risk
  • Industry observers see a shift toward modular AI hardware rather than single‑vendor GPUs

Pulse Analysis

Google’s AI ambitions have long hinged on custom silicon, most notably its Tensor Processing Units (TPUs) that power services from Search to Gemini. In April 2026 the cloud giant sealed a multi‑year deal with Broadcom to supply future‑generation AI accelerators through 2031, cementing a deep partnership. Yet a new Reuters report indicates Google is simultaneously courting Marvell for two additional chips—a dedicated inference TPU and a Memory Processing Unit (MPU) designed to boost high‑bandwidth memory connectivity. This dual‑track approach reflects a strategic shift from single‑vendor reliance to a diversified supply chain that can better manage cost, capacity, and technological risk.

Marvell’s appeal lies in its modular AI portfolio, which includes HBM‑centric XPU architectures and advanced networking IP. By adding an MPU, Google could offload memory‑intensive workloads from its TPUs, improving efficiency in data‑center racks where scale drives operating expenses. The parallel negotiations also give Google leverage in price talks with Broadcom, potentially securing more favorable terms or faster technology roadmaps. For a hyperscaler that processes exabytes of data daily, having multiple silicon partners mitigates the impact of any single supplier’s production hiccups, a concern amplified by recent global chip shortages.

The broader market is watching closely. If Google formalizes a Marvell contract, it could accelerate the fragmentation of AI hardware into specialized components—memory, compute, and networking—rather than relying on monolithic GPUs. Competitors such as Nvidia may feel pressure to open their architectures or partner with foundries to retain relevance. Meanwhile, semiconductor firms like Marvell stand to gain credibility as viable alternatives for custom AI silicon, potentially reshaping vendor dynamics in the next generation of data‑center infrastructure. The outcome will likely influence pricing, innovation speed, and the overall architecture of AI workloads for years to come.
