AMD, NVIDIA, OpenAI & Others Form An Optical Scale-Up Consortium
Key Takeaways
- Consortium includes AMD, Broadcom, Meta, Microsoft, NVIDIA, OpenAI.
- Targets optical scale‑up for AI beyond copper limits.
- OCI GEN2 offers 400 Gbps per direction, 800 Gbps per fiber.
- Roadmap aims for 3.2 Tbps per fiber via WDM scaling.
- Supports pluggable, on‑board, co‑packaged optics form factors.
Summary
AMD, Broadcom, Meta, Microsoft, NVIDIA and OpenAI have launched the Optical Compute Interconnect (OCI) Multi‑Source Agreement consortium. The group aims to create an open, multi‑vendor ecosystem for optical scale‑up interconnects that replace copper in AI data‑center clusters. OCI’s specification combines NRZ modulation with wavelength‑division multiplexing and a silicon‑centric architecture to boost bandwidth density while meeting power and cost targets. The roadmap promises up to 800 Gbps per fiber now, with plans to exceed 3 Tbps per fiber in future generations.
Pulse Analysis
The rapid expansion of large language models has exposed the bandwidth and latency constraints of traditional copper back‑plane connections. Data‑center architects are now forced to consider alternatives that can sustain petabit‑scale traffic without prohibitive power draw. Optical interconnects, long used in telecom, offer the reach and density required for AI clusters, but their adoption has been hampered by fragmented standards and proprietary solutions. By uniting leading silicon and AI players, the OCI MSA seeks to eliminate these barriers and provide a clear migration path.
OCI’s technical blueprint leverages non‑return‑to‑zero (NRZ) signaling combined with wavelength‑division multiplexing (WDM) to deliver 200 Gbps per lane in its first generation and 400 Gbps per direction in GEN2, effectively doubling per‑fiber capacity to 800 Gbps. The specification’s silicon‑centric approach encourages tight integration of photonics with compute dies, reducing the need for external transceivers and cutting both latency and power consumption. Moreover, the roadmap’s scalability to 3.2 Tbps per fiber through additional wavelengths positions OCI as a future‑proof solution for increasingly dense GPU and accelerator racks.
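The bandwidth figures above can be sanity-checked with simple arithmetic: per-fiber capacity is the per-lane rate multiplied by the number of wavelengths and directions. The sketch below assumes a breakdown of two wavelengths per direction for GEN2 and eight for the 3.2 Tbps roadmap target; the 200 Gbps per-lane rate comes from the article, but the exact lane/wavelength split is an illustrative assumption, not the official OCI specification.

```python
# Back-of-the-envelope OCI per-fiber bandwidth math.
# 200 Gbps/lane is the first-generation NRZ rate cited in the article;
# the wavelength counts below are assumptions for illustration.

GBPS_PER_LANE = 200  # first-generation per-lane rate


def fiber_bandwidth_gbps(wavelengths_per_direction: int, directions: int = 2) -> int:
    """Aggregate Gbps on one fiber: per-lane rate x wavelengths x directions."""
    return GBPS_PER_LANE * wavelengths_per_direction * directions


# GEN2: 2 wavelengths x 200 Gbps = 400 Gbps per direction, 800 Gbps per fiber
print(fiber_bandwidth_gbps(wavelengths_per_direction=2))  # 800

# Roadmap: 8 wavelengths per direction would reach 3.2 Tbps per fiber
print(fiber_bandwidth_gbps(wavelengths_per_direction=8))  # 3200
```

Under these assumptions, scaling from GEN2 to the roadmap target requires only adding wavelengths via WDM, with no change to the per-lane signaling rate.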
From a market perspective, an open, multi‑vendor standard lowers entry barriers for smaller OEMs and accelerates innovation across the AI hardware supply chain. Companies that adopt OCI can expect faster time‑to‑market for high‑performance clusters, while end‑users benefit from reduced total cost of ownership. The consortium’s composition—excluding Intel but featuring NVIDIA and OpenAI—signals a strategic alignment among AI‑centric firms seeking to shape the next generation of data‑center architecture. As optical solutions mature, they are likely to become the default interconnect for AI workloads, reshaping competitive dynamics and driving new business models around photonic integration.