
The GX10 gives developers on‑premise AI power at a fraction of data‑centre costs, but its limited upgrade path may constrain long‑term adaptability as AI architectures evolve.
The rapid adoption of generative AI has driven enterprises to reassess their compute strategies. While cloud providers offer virtually unlimited resources, recurring costs and data-privacy concerns push many organizations toward on-premise solutions. Asus's Ascent GX10 arrives as a purpose-built workstation that packs data-centre-class performance into a NUC-sized chassis, targeting developers who need deterministic latency and a predictable cost structure for local model training. It ships with Nvidia DGX OS, an Ubuntu-based distribution tuned for AI workloads.
Under the hood, the GX10 pairs a 20-core Armv9.2-A processor with Nvidia's Blackwell GB10 accelerator, delivering up to one petaFLOP of FP4 compute while drawing roughly 140 W. The unified 128 GB of LPDDR5x gives the CPU and GPU a single shared memory pool, avoiding explicit host-to-device copies, and the 200 Gbps ConnectX-7 SmartNIC lets two units be linked via InfiniBand, effectively doubling the available compute and memory. Its 10 GbE LAN and Wi-Fi 7 connectivity further simplify integration into existing lab networks. With a 240 W (48 V, 5 A) power supply and a compact 150 mm footprint, the system balances raw AI throughput against a modest thermal envelope.
At a starting price of $3,099 for the 1 TB model, the GX10 undercuts comparable AI appliances such as Nvidia's DGX Spark and Gigabyte's AI TOP ATOM, making it one of the most affordable desktop-class AI supercomputers in Europe. However, limited internal expandability, a single M.2 slot, and the absence of USB-A ports may curb its appeal in the broader workstation market. Enterprises can also link two units over ConnectX-7, pooling memory to serve models exceeding 400 billion parameters. As long as large-scale model training remains dominated by dense transformer architectures, the GX10 offers a compelling on-premise alternative; a shift toward fundamentally new AI paradigms could render its specialized hardware obsolete.
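To see why linking two units matters, a quick back-of-envelope calculation helps (our own arithmetic, not a vendor benchmark): at 4-bit precision each parameter needs half a byte, so a 405-billion-parameter model, for example, carries roughly 203 GB of weights, too large for one unit's 128 GB but comfortably inside the 256 GB of a linked pair. The sketch below generalizes the check; note that KV cache and activations add overhead it ignores.

```python
# Back-of-envelope sizing: do a dense model's quantized weights fit in the
# GX10's unified memory, on one unit or across two linked units?
# Our own arithmetic, not vendor data; ignores KV-cache and activation
# overhead, so real headroom is smaller.

def weights_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate weight footprint in decimal GB for a dense model."""
    return params_billion * bits_per_param / 8  # 1e9 params * (bits / 8) bytes

SINGLE_GB = 128            # one GX10's unified LPDDR5x pool
PAIRED_GB = 2 * SINGLE_GB  # two units linked via ConnectX-7

for params_b, bits in [(70, 4), (200, 4), (405, 4), (405, 8)]:
    need = weights_gb(params_b, bits)
    fits = ("single unit" if need <= SINGLE_GB
            else "paired units" if need <= PAIRED_GB
            else "neither")
    print(f"{params_b}B parameters @ {bits}-bit: ~{need:.0f} GB -> {fits}")
```

Run as-is, the loop shows a 70B or 200B model quantized to 4 bits fitting on a single unit, a 405B model at 4 bits requiring the paired configuration, and the same model at 8 bits overflowing even the combined pool, which is consistent with the roughly 400-billion-parameter ceiling quoted above.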