
By delivering near‑theoretical NVMe performance with standard NFS semantics, xiNAS lowers the cost and complexity of AI storage, enabling faster model training and more resilient data pipelines.
The explosion of generative AI and high‑performance computing has turned storage into a strategic bottleneck. Enterprises traditionally rely on proprietary flash arrays or custom NVMe‑over‑Fabric solutions that demand specialized clients and steep integration costs. xiNAS challenges that model by marrying a pure‑software RAID stack with the mature XFS file system and NFS over RDMA, offering a familiar POSIX interface while unlocking raw NVMe bandwidth for GPU‑driven workloads.
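Because the client side of this stack is plain NFS, no proprietary driver or agent is needed on GPU nodes. As a minimal sketch (the server name, export path, and mount point below are hypothetical; the article does not detail Xinnor's exact configuration), an NFS‑over‑RDMA mount on a stock Linux client looks like:

```shell
# Load the in-kernel NFS/RDMA transport module on the client
modprobe rpcrdma

# Mount the export over RDMA; 20049 is the standard NFS-RDMA port.
# "xinas-server:/export/ai-data" and /mnt/ai-data are illustrative names.
mount -t nfs -o rdma,port=20049 xinas-server:/export/ai-data /mnt/ai-data

# Verify the transport in use (look for proto=rdma in the mount options)
grep ai-data /proc/mounts
```

From here, training jobs read and write through ordinary POSIX calls; the RDMA transport is transparent to applications.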
Xinnor’s validation on a Supermicro AS‑1116CS‑TN platform shows the practical impact of this architecture. A single node with 12 × PCIe Gen5 NVMe SSDs delivered 74.5 GB/s of sequential reads and 39.5 GB/s of sequential writes, while in small‑block testing the same node sustained 990K read IOPS at 265 µs latency and 587K write IOPS at 430 µs. Adding a second node pushed sequential reads to 117 GB/s with near‑linear scaling, and read performance dipped only 8.5 % during an SSD failure, recovering quickly during the rebuild. That resilience matters for AI training pipelines that mix streaming and random‑access patterns.
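The article does not name the benchmark tool, but results of this shape (large‑block sequential bandwidth plus small‑block IOPS with latency) are typically gathered with fio. A hedged sketch of two such runs follows; the file path, sizes, and queue depths are illustrative assumptions, not Xinnor's actual test parameters:

```shell
# Sequential read bandwidth: large blocks, deep queues, O_DIRECT to bypass page cache
fio --name=seqread --filename=/mnt/ai-data/testfile --size=50G \
    --rw=read --bs=1M --ioengine=libaio --iodepth=32 --numjobs=8 \
    --direct=1 --group_reporting

# Random read IOPS and latency: 4K blocks, higher parallelism
fio --name=randread --filename=/mnt/ai-data/testfile --size=50G \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=64 --numjobs=16 \
    --direct=1 --group_reporting
```

fio's `group_reporting` output summarizes aggregate bandwidth, IOPS, and completion‑latency percentiles, which is the form in which the figures above are quoted.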
For storage vendors and cloud providers, xiNAS presents a low‑overhead, standards‑based alternative that can be deployed on off‑the‑shelf hardware. Its reliance on NFS eliminates the need for custom client stacks, accelerating time‑to‑value and reducing operational complexity. As AI models grow in size and training cycles shorten, solutions that combine high throughput, fault tolerance, and open protocols are likely to gain traction, positioning Xinnor as a compelling challenger in the flash‑filer market.