
Is Nvidia Opening up Its NVLink Doors Even Further? New Partnership with Arm Will See Greater Integration Across Many Kinds of Chips
Why It Matters
By extending NVLink to Arm CPUs, Nvidia broadens its AI hardware ecosystem, giving cloud providers and AI developers more flexibility in system design and potentially accelerating adoption of Arm‑based servers while reducing reliance on competing interconnects.
Summary
Nvidia announced that its NVLink Fusion interconnect will now be supported on Arm‑based Neoverse CPUs, enabling direct, high‑bandwidth communication between custom Arm processors and Nvidia GPUs. The move extends NVLink beyond Nvidia’s own CPUs and Intel/AMD‑based servers, allowing hyperscalers such as Microsoft, Amazon and Google to build AI workstations and servers that pair Arm CPUs with multiple Nvidia GPUs. Arm licensees can integrate NVLink IP into their SoCs, offering a more efficient alternative to PCIe and opening the door for sovereign AI projects and OpenAI‑backed initiatives that prefer Arm control‑plane chips. The announcement, made at Supercomputing ’25, signals broader ecosystem adoption of Nvidia’s GPU acceleration across diverse CPU architectures.