
Nvidia GTC 2026: KIOXIA Announces New SSD Model Optimized for AI GPU-Initiated Workloads
Key Takeaways
- KIOXIA launches the GP Series SSD for GPU‑direct AI workloads.
- The drive delivers higher IOPS, 512‑byte access granularity, and lower power per I/O.
- Expands effective GPU memory beyond HBM under Nvidia's Storage‑Next initiative.
- The 25.6 TB CM9 PCIe 5.0 SSD targets trillion‑parameter models.
- Evaluation samples go to select customers by end of 2026; the CM9 ships in Q3 2026.
Summary
At Nvidia’s GTC 2026, KIOXIA unveiled its new Super High IOPS SSD, the GP Series, designed for direct GPU access in AI workloads. The drive leverages KIOXIA’s XL‑FLASH storage class memory to deliver higher IOPS, 512‑byte granularity, and lower power per operation, effectively extending GPU memory beyond HBM under Nvidia’s Storage‑Next initiative. Evaluation samples will be provided to select customers by the end of 2026, with the larger 25.6 TB CM9 PCIe 5.0 SSD slated for shipment in Q3 2026. The announcement positions KIOXIA as a key storage partner for scaling trillion‑parameter AI models.
Pulse Analysis
The rapid growth of AI models has exposed a fundamental bottleneck: GPU memory, traditionally supplied by high‑bandwidth memory (HBM), cannot keep pace with expanding parameter counts and context windows. Nvidia’s Storage‑Next initiative addresses this gap by allowing GPUs to offload data to ultra‑fast flash storage, effectively extending the memory hierarchy without sacrificing bandwidth. This architectural shift is critical as enterprises move from compute‑centric to data‑centric AI workloads, demanding storage that can be addressed directly by the GPU.
KIOXIA’s GP Series Super High IOPS SSD is engineered to meet those demands. Built on the company’s XL‑FLASH storage class memory, the drive offers unprecedented IOPS density and a fine‑grained 512‑byte access size, reducing latency and power per operation compared with conventional TLC SSDs. By presenting flash as a direct‑access extension to HBM, the GP Series enables larger datasets to reside closer to the compute engine, improving utilization rates and reducing the need for costly GPU upgrades. The accompanying CM9 PCIe 5.0 E3.S model adds 25.6 TB of capacity with 3 DWPD endurance, targeting inference clusters that run trillion‑parameter models.
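Neither KIOXIA nor Nvidia has published the programming model for the GP Series, but the "direct‑access extension to HBM" described above resembles today's GPUDirect Storage path, where data moves from an NVMe SSD straight into GPU memory without bouncing through host RAM. As a rough illustration only, a host‑side read via Nvidia's cuFile API might look like the sketch below; the file path and transfer size are made up, and the code requires CUDA, libcufile, and GDS‑capable hardware to actually run:

```c
#define _GNU_SOURCE           /* for O_DIRECT */
#include <cuda_runtime.h>
#include <cufile.h>           /* GPUDirect Storage (cuFile) API */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    const size_t len = 4096;  /* illustrative transfer size */

    /* Hypothetical path to a model shard on the NVMe drive. */
    int fd = open("/mnt/nvme/model.shard", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    cuFileDriverOpen();       /* initialize the GDS driver */

    /* Register the file so the DMA engine can address it directly. */
    CUfileDescr_t descr = {0};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);

    /* The destination buffer lives in GPU memory, not host RAM. */
    void *gpu_buf;
    cudaMalloc(&gpu_buf, len);
    cuFileBufRegister(gpu_buf, len, 0);

    /* Data flows SSD -> GPU with no CPU bounce buffer in between. */
    ssize_t n = cuFileRead(handle, gpu_buf, len,
                           /*file_offset=*/0, /*buf_offset=*/0);
    printf("read %zd bytes directly into GPU memory\n", n);

    cuFileBufDeregister(gpu_buf);
    cudaFree(gpu_buf);
    cuFileHandleDeregister(handle);
    cuFileDriverClose();
    close(fd);
    return 0;
}
```

The drive's 512‑byte access granularity matters here: the finer the addressable unit, the less data a GPU must transfer to fetch a single parameter block, which is what drives down latency and power per operation relative to conventional 4 KB‑granular TLC SSDs.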
The partnership signals a broader market trend where flash vendors are becoming integral to AI infrastructure. As AI workloads continue to scale, the ability to provision GPU‑accessible storage will influence data‑center design, cost structures, and performance benchmarks. KIOXIA’s early samples and planned Q3 shipments position it to capture a share of this emerging segment, while Nvidia’s push for storage‑centric AI architectures may spur further innovations in memory‑class storage technologies.