NVIDIA DGX Station Systems Available At Last: GB300 and GB200 Workstations For Your Desktop

ServeTheHome, Mar 20, 2026

Why It Matters

The DGX Station brings server‑grade AI performance to the desktop, enabling private, high‑throughput workloads without relying on costly cloud resources. Its high price and limited memory bandwidth, however, make it a niche solution for enterprises that prioritize data sovereignty and latency.

Key Takeaways

  • DGX Station ships with 252 GB HBM3e, 12% less than planned
  • Soldered 72‑core Grace CPU with B300 GPU, no SXM
  • Power limit 1.6 kW fits standard 120 V outlet
  • Pricing $80‑125 K, driven by DRAM/NAND shortages
  • OEM partners: ASUS, Dell, HP, MSI, Supermicro, Gigabyte

Pulse Analysis

The DGX Station brings the compute density of NVIDIA's Grace Blackwell servers to a desktop‑sized platform. By integrating a soldered 72‑core Grace CPU and a B300 Blackwell Ultra GPU on a single motherboard, the workstation delivers up to 252 GB of HBM3e memory and 496 GB of LPDDR5X system RAM, albeit with a 12% memory reduction from the original roadmap. Its 1.6 kW power envelope fits typical North American outlets, while dual 400 Gbps ConnectX‑8 Ethernet ports position the box for high‑speed data movement in AI research labs.
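A quick sanity check on the power claim: the 1.6 kW limit and 120 V figure come from the article, while the 15 A breaker rating is a common residential assumption, not something NVIDIA specifies.

```python
# Does a 1.6 kW power limit fit a standard North American outlet?
# POWER_LIMIT_W and VOLTAGE_V are from the article; BREAKER_A is an
# assumed typical 15 A residential branch circuit.
POWER_LIMIT_W = 1600.0
VOLTAGE_V = 120.0
BREAKER_A = 15.0

draw_amps = POWER_LIMIT_W / VOLTAGE_V  # I = P / V
print(f"draw at full limit: {draw_amps:.1f} A (breaker rated {BREAKER_A:.0f} A)")
# -> draw at full limit: 13.3 A (breaker rated 15 A)
```

At full load the system pulls roughly 13.3 A, under the 15 A rating of a standard circuit.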

From a business perspective, the DGX Station offers a compelling alternative to cloud‑based AI training for organizations that need to keep sensitive data on‑premise. The ability to run large language models locally—potentially generating over 10 million tokens per day—can reduce recurring cloud spend and mitigate latency concerns. However, the steep price tag, ranging from $80 K to $125 K, reflects both the premium hardware and ongoing DRAM/NAND supply constraints, limiting adoption to well‑funded enterprises and specialized research teams.
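The "over 10 million tokens per day" figure implies a sustained generation rate, which the sketch below derives. The rates here are back‑of‑the‑envelope arithmetic from the daily figure, not measured NVIDIA benchmarks.

```python
# Back-of-the-envelope on the "over 10 million tokens per day" claim.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# Sustained rate needed to hit 10M tokens in a day:
required_rate = 10_000_000 / SECONDS_PER_DAY
print(f"required sustained rate: {required_rate:.0f} tokens/s")
# -> required sustained rate: 116 tokens/s

# Conversely, holding ~120 tokens/s around the clock:
daily = 120 * SECONDS_PER_DAY
print(f"120 tok/s sustained: {daily:,} tokens/day")
# -> 120 tok/s sustained: 10,368,000 tokens/day
```

In other words, the claim assumes the box sustains on the order of 115 to 120 tokens per second continuously, a plausible rate for a local large language model on this class of hardware.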

The ecosystem surrounding the DGX Station is bolstered by a roster of established OEMs—ASUS, Dell, HP, MSI, Gigabyte and Supermicro—each delivering tower configurations that accommodate the oversized motherboard and extensive I/O. Meanwhile, Supermicro’s GB200‑based desktop targets HPC developers seeking higher FP64 performance at a lower memory capacity. As AI workloads continue to scale, NVIDIA’s strategy of offering both high‑end, server‑grade workstations and more modest developer kits positions it to capture a broad spectrum of the on‑prem AI market, provided supply chain pressures ease and pricing becomes more competitive.
