WEKA Claims Nvidia CMX Support Plays to Its Strengths

Blocks & Files, Apr 9, 2026

Why It Matters

CMX levels the storage playing field while preserving WEKA’s performance edge, reinforcing its position in AI‑intensive workloads. This compatibility accelerates adoption of high‑throughput GPU clusters for large language model training and inference.

Key Takeaways

  • WEKA’s NeuralMesh already supports Nvidia Grace servers
  • CMX support requires no software changes from WEKA
  • WEKA achieves 97% of line rate on CX‑7 400 GbE
  • BF‑4 DPUs double core counts, boosting NeuralMesh performance
  • CMX integration expands WEKA’s KV cache for LLM workloads

Pulse Analysis

Nvidia’s CMX KV‑cache extension, unveiled at GTC 2026, extends the memory hierarchy of GPU‑centric servers by allowing external RDMA‑connected SSDs to act as high‑speed cache tiers. The move promises to reduce latency for data‑intensive AI workloads, but it also democratizes a capability that previously required bespoke software stacks: competitors that can tap into the CMX/STX reference architecture now have a path to performance once exclusive to vendors such as WEKA and Hammerspace.

WEKA’s response hinges on its NeuralMesh architecture, a containerized microservice layer that runs on Nvidia’s Grace CPUs and BF‑4 DPUs. Because NeuralMesh was built for Arm‑based Grace platforms from the start, WEKA can ingest CMX‑enabled SSDs without rewriting its codebase. The company reports 97% of line rate on CX‑7 400 GbE under mixed I/O workloads and expects similar results on the upcoming CX‑8/9 800 GbE Vera configurations. The BF‑4’s doubled core count further accelerates queue‑depth management, NVMe fabric metadata handling, and protocol services, delivering tangible throughput gains.
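To put the 97% figure in concrete terms, a quick back-of-the-envelope conversion shows the effective data rate per 400 GbE link. This is plain arithmetic: the efficiency fraction comes from the article, and everything else is standard unit conversion.

```python
# Convert "97% of line rate on 400 GbE" into an effective data rate.
# The 0.97 efficiency figure is the article's reported number; the
# nominal 400 Gb/s rate is the 400 GbE standard's signaling rate.

LINE_RATE_GBPS = 400   # 400 GbE nominal rate, gigabits per second
EFFICIENCY = 0.97      # fraction of line rate reportedly achieved

effective_gbps = LINE_RATE_GBPS * EFFICIENCY  # gigabits per second
effective_gBps = effective_gbps / 8           # gigabytes per second (8 bits/byte)

print(f"{effective_gbps:.0f} Gb/s ≈ {effective_gBps:.1f} GB/s per link")
```

At roughly 48.5 GB/s per link, a handful of links already rivals local NVMe bandwidth, which is what makes an RDMA-attached cache tier plausible in the first place.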

For enterprises deploying large language models, such as the 10‑trillion‑parameter Claude Mythos or the upcoming DeepSeek‑4 architectures, the expanded KV‑cache capacity enabled by CMX is critical. WEKA’s AXON filesystem can now scale its cache routing and scheduling to meet the memory demands of hybrid DiT, Mamba, and JEPA models, positioning the company as a preferred storage partner for frontier AI labs. By integrating CMX as a hardware offering in its next WEKApod release, WEKA signals a strategic commitment to stay ahead of the rapidly evolving AI infrastructure market.
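The reason KV-cache capacity is the bottleneck can be seen with a standard transformer sizing formula. The sketch below is illustrative only: the model dimensions are hypothetical round numbers, not the specifications of any model named in the article, and the formula assumes a conventional decoder with grouped-query attention and fp16 storage.

```python
# Estimate the KV-cache footprint of a transformer decoder during inference.
# Two tensors (keys and values) are cached per layer, each of shape
# [batch, num_kv_heads, seq_len, head_dim].

def kv_cache_bytes(num_layers, num_kv_heads, head_dim,
                   seq_len, batch, bytes_per_elem=2):
    """Total bytes of KV cache; bytes_per_elem=2 assumes fp16/bf16."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * batch * bytes_per_elem

# Hypothetical 70B-class model with grouped-query attention, serving a
# 128k-token context at batch size 8 (all parameters are assumptions).
size = kv_cache_bytes(num_layers=80, num_kv_heads=8, head_dim=128,
                      seq_len=128_000, batch=8)
print(f"{size / 2**30:.1f} GiB")
```

Even this modest configuration lands in the hundreds of gibibytes, well beyond a single GPU's HBM, which is precisely the gap an SSD-backed cache tier like CMX is meant to fill.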

