OFC 2026: Marvell Launches Next-Generation CXL Switch, Enabling Memory Pooling to Break Through the AI “Memory Wall”

StorageNewsletter, Mar 26, 2026

Key Takeaways

  • 260‑lane CXL 3.0 switch delivers up to 4 TB/s bandwidth.
  • Enables rack‑wide memory pooling across CPUs, GPUs, XPUs.
  • Improves AI inference throughput and reduces DRAM cost pressures.
  • Completes Marvell’s end‑to‑end CXL portfolio after XConn acquisition.
  • Sampling starts in Q3 2026; the CXL 2.0 version is already in production.

Summary

Marvell announced the Structera S 30260, a 260‑lane CXL 3.0 switch that enables rack‑level memory pooling for AI workloads. The device offers up to 4 TB/s aggregate bandwidth and works with Marvell’s Structera A accelerators, Structera X expansion controllers, and Alaska P retimers to provide disaggregated memory across CPUs, GPUs and XPUs. By breaking the AI “memory wall,” it promises higher memory utilization, lower latency and reduced total‑cost‑of‑ownership. Sampling begins in Q3 2026, following production of the CXL 2.0 version.

Pulse Analysis

The rapid expansion of large‑language models has exposed a critical “memory wall” in modern data centers. As context windows widen and key‑value caches balloon, traditional DRAM scaling becomes both financially and logistically untenable. CXL (Compute Express Link) emerged as a standards‑based solution for memory disaggregation, allowing compute nodes to tap remote memory pools with low latency. Industry analysts warn that without such architectures, AI training and inference costs could double, prompting vendors to accelerate CXL‑centric roadmaps.
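To see why key‑value caches strain server DRAM as context windows widen, a back‑of‑envelope calculation helps. The sketch below uses the standard transformer KV‑cache sizing formula with purely illustrative model parameters (80 layers, 8 KV heads, head dimension 128); these numbers are assumptions for the example, not figures from any specific model or product.

```python
# Back-of-envelope KV-cache sizing for a hypothetical transformer,
# illustrating why wider context windows strain server DRAM.
# All model parameters below are illustrative assumptions.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Size of the key-value cache for one sequence (FP16 by default).

    Two tensors (K and V) per layer, each seq_len x kv_heads x head_dim.
    """
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

# Example: a 70B-class model with 80 layers, 8 KV heads (grouped-query
# attention), head_dim 128, at a 128k-token context window.
per_seq = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128, seq_len=128_000)
print(f"KV cache per sequence: {per_seq / 2**30:.1f} GiB")

# Serving 64 concurrent sequences multiplies the footprint:
print(f"For 64 sequences: {64 * per_seq / 2**40:.2f} TiB")
```

At roughly 39 GiB per 128k‑token sequence, a modest batch of concurrent requests already exceeds what a single server's DRAM can hold, which is the pressure pooled CXL memory is meant to relieve.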

Marvell’s Structera S 30260 pushes the envelope with a 260‑lane, CXL 3.0‑compliant switch capable of 4 TB/s bandwidth. Integrated with the company’s existing Structera A near‑memory accelerators, Structera X expansion controllers, and Alaska P PCIe/CXL retimers, the solution creates a unified fabric that dynamically allocates memory across CPUs, GPUs, XPUs, and storage. This composable approach reduces multi‑hop data movement, delivering sub‑microsecond access times and higher GPU utilization, while freeing operators from costly HBM stacking or server‑level DRAM upgrades.
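The quoted 4 TB/s aggregate figure is consistent with a quick raw‑signaling estimate. The sketch below assumes CXL 3.0's PCIe 6.0‑based PHY at 64 GT/s per lane and ignores FLIT framing and FEC overhead, so real payload throughput would be somewhat lower; it is a sanity check, not a spec‑level calculation.

```python
# Rough sanity check of the 4 TB/s aggregate bandwidth figure, assuming
# CXL 3.0's PCIe 6.0-based signaling at 64 GT/s per lane and ignoring
# FLIT/FEC protocol overhead (real payload throughput is lower).

LANES = 260
GT_PER_LANE = 64  # gigatransfers/s per lane (PCIe 6.0 PAM4 signaling)

raw_gbps = LANES * GT_PER_LANE      # gigabits/s, one direction
one_dir_tbs = raw_gbps / 8 / 1000   # terabytes/s, one direction
aggregate_tbs = 2 * one_dir_tbs     # lanes are full duplex: sum both directions

print(f"One direction: ~{one_dir_tbs:.2f} TB/s")
print(f"Aggregate (bidirectional): ~{aggregate_tbs:.2f} TB/s")
```

The raw estimate lands at about 2.08 TB/s per direction, or roughly 4.16 TB/s bidirectional, matching the announced "up to 4 TB/s" once protocol overhead is accounted for.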

For hyperscalers and enterprise clouds, the timing is strategic. With DRAM supply constraints driving price spikes, a CXL‑based pooling architecture offers a cost‑effective path to scale AI workloads. Marvell’s end‑to‑end portfolio, bolstered by the XConn acquisition, positions it against rivals like Intel and Nvidia that are also courting the CXL market. Early adopters can expect lower total‑cost‑of‑ownership and greater design flexibility, though success will hinge on ecosystem maturity and software orchestration tools. As AI models continue to grow, memory‑centric innovations like the Structera S will likely become foundational infrastructure components.
