ICC Launches Aquarius R-117A Immersion-Native 1U Server with 6 NVIDIA H200 GPUs
Key Takeaways
- Six NVIDIA H200 GPUs fit in a 1U immersion chassis.
- 846 GB of combined GPU memory supports large AI models on a single node.
- A single-socket AMD EPYC Turin CPU with up to 192 cores feeds the GPUs without NUMA-induced latency.
- The immersion-native design achieves a PUE of roughly 1.03, far lower cooling overhead than air-cooled servers.
- Dual 5,200 W PSUs power a GPU load exceeding 4.2 kW.
Summary
ICC unveiled the Aquarius R-117A, an immersion‑native 1U server that packs six NVIDIA H200 SXM GPUs and a single‑socket AMD EPYC Turin processor with up to 192 cores and 3 TB DDR5 RAM. The system delivers 846 GB of combined GPU memory and more than 4.2 kW of GPU power in a single rack unit, using dielectric oil cooling to achieve a PUE near 1.03. By designing the chassis specifically for oil immersion, ICC eliminates the thermal limits of air‑cooled servers and offers three‑to‑four‑fold higher compute density. The platform targets AI, HPC, finance, and oil‑and‑gas workloads that demand massive memory bandwidth and low‑latency GPU interconnect via NVLink.
Pulse Analysis
Oil immersion cooling has moved from niche experiments to a mainstream strategy for data centers seeking ultra‑low Power Usage Effectiveness. Traditional servers retro‑fitted for immersion retain airflow‑centric layouts, limiting heat transfer and forcing conservative power budgets. ICC’s Aquarius R-117A flips that paradigm by engineering every component—PCB stack‑up, power delivery, and chassis geometry—for direct contact with dielectric fluid. The result is a PUE around 1.03, dramatically reducing cooling overhead and enabling unprecedented compute density within a 1U footprint.
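To see what a PUE of 1.03 means in practice, the sketch below compares cooling overhead against a typical air-cooled figure. The 1.03 value comes from the article; the ~1.5 air-cooled comparison and the 10 kW rack load are illustrative assumptions, not ICC figures.

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
def cooling_overhead_kw(pue: float, it_load_kw: float) -> float:
    """Facility power spent on everything besides the IT load itself (kW)."""
    return pue * it_load_kw - it_load_kw

it_load_kw = 10.0  # hypothetical rack-level IT load

immersion = cooling_overhead_kw(1.03, it_load_kw)  # article's PUE
air_cooled = cooling_overhead_kw(1.5, it_load_kw)  # typical air-cooled PUE (assumed)

print(f"immersion overhead: {immersion:.1f} kW")   # 0.3 kW
print(f"air-cooled overhead: {air_cooled:.1f} kW") # 5.0 kW
```

At PUE 1.03, only about 3% of facility power goes to non-IT overhead, versus roughly 50% extra for a conventional air-cooled deployment.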
At the heart of the Aquarius R-117A are six NVIDIA H200 SXM GPUs, each equipped with 141 GB of HBM3e memory and 4.8 TB/s bandwidth, totaling 846 GB of GPU memory. NVLink interconnect stitches the GPUs into a unified memory fabric, eliminating PCIe bottlenecks and delivering sub‑microsecond data exchange for massive AI models and high‑fidelity simulations. The single‑socket AMD EPYC Turin processor, built on TSMC’s 3 nm node, offers up to 192 Zen 5 cores and 3 TB of DDR5 ECC RAM, ensuring the CPU can feed the GPUs without NUMA‑induced latency. Power delivery is handled by dual 5.2 kW PSUs, comfortably supporting the 4.2 kW GPU thermal envelope.
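The headline figures above follow directly from the per-GPU specs, as a quick sanity check shows. The 700 W per-GPU TDP used below is NVIDIA's published H200 SXM figure; the remaining numbers are from the article.

```python
# Sanity-check the article's aggregate numbers from per-component specs.
GPUS = 6
HBM_PER_GPU_GB = 141   # H200 SXM HBM3e capacity
GPU_TDP_W = 700        # H200 SXM TDP (NVIDIA spec)
PSU_W = 5200           # capacity of each of the dual PSUs

total_hbm_gb = GPUS * HBM_PER_GPU_GB   # 846 GB combined GPU memory
gpu_power_w = GPUS * GPU_TDP_W         # 4200 W, the ~4.2 kW GPU envelope
headroom_w = PSU_W - gpu_power_w       # per-PSU margin left for CPU, RAM, etc.

print(total_hbm_gb, gpu_power_w, headroom_w)  # 846 4200 1000
```

Each PSU alone covers the full GPU envelope with about 1 kW to spare, which is why the dual-PSU configuration comfortably supports the rest of the system.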
For enterprises, the Aquarius R-117A translates into tangible cost and performance advantages. Large language model training, seismic processing, or real‑time quantitative finance can now run on a single rack unit, slashing rack space, power distribution complexity, and network fabric requirements. The reduced physical footprint and superior energy efficiency lower total cost of ownership while delivering the low‑latency, high‑bandwidth environment demanded by next‑generation AI and HPC workloads. As more organizations prioritize sustainability and speed, immersion‑native servers like ICC’s are poised to become a cornerstone of high‑performance infrastructure.