
AI Pulse

Xinnor's Alternative Software RAID Filer for AI

Big Data • CIO Pulse • AI • Hardware

Blocks & Files • February 12, 2026

Why It Matters

By delivering near‑theoretical NVMe performance with standard NFS semantics, xiNAS lowers the cost and complexity of AI storage, enabling faster model training and more resilient data pipelines.

Key Takeaways

  • xiNAS delivers 74.5 GB/s reads and 39.5 GB/s writes on a single node
  • Near-linear scaling reaches 117 GB/s reads with two nodes
  • NFS over RDMA removes client‑side bottlenecks
  • A single-SSD failure reduces reads by only 8.5%; writes stay stable
  • Combines XFS, xiRAID, and standard NFS semantics

Pulse Analysis

The explosion of generative AI and high‑performance computing has turned storage into a strategic bottleneck. Enterprises traditionally rely on proprietary flash arrays or custom NVMe‑over‑Fabric solutions that demand specialized clients and steep integration costs. xiNAS challenges that model by marrying a pure‑software RAID stack with the mature XFS file system and NFS over RDMA, offering a familiar POSIX interface while unlocking raw NVMe bandwidth for GPU‑driven workloads.

Xinnor’s validation on a Supermicro AS‑1116CS‑TN platform showcases the practical impact of this architecture. A single node with 12 × PCIe Gen5 NVMe SSDs delivered 74.5 GB/s sequential reads and 39.5 GB/s writes, translating to 990,000 read IOPS at 265 µs latency and 587,000 write IOPS at 430 µs latency. Adding a second node pushed sequential reads to 117 GB/s with near‑linear scaling, while read performance dipped only 8.5% during an SSD failure and recovered quickly during rebuilds, underscoring the solution’s resilience for AI training pipelines that mix streaming and random‑access patterns.
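As a rough sanity check on these figures, Little's Law (outstanding I/Os ≈ IOPS × latency) can be applied to the published numbers. The throughput and latency values below come from the article; the derived queue-depth, scaling-efficiency, and degraded-mode estimates are our own back-of-envelope arithmetic, not Xinnor's:

```python
# Published single-node figures (from the article)
read_iops, read_lat_s = 990_000, 265e-6    # 990K IOPS at 265 µs
write_iops, write_lat_s = 587_000, 430e-6  # 587K IOPS at 430 µs

# Little's Law: concurrency = arrival rate × time in system
read_qd = read_iops * read_lat_s     # ≈ 262 outstanding reads
write_qd = write_iops * write_lat_s  # ≈ 252 outstanding writes

# Two-node scaling efficiency relative to perfect linear scaling
single_gbs, dual_gbs = 74.5, 117.0           # GB/s sequential reads
efficiency = dual_gbs / (2 * single_gbs)     # ≈ 0.79

# Read bandwidth implied by the 8.5% dip during an SSD failure
degraded_gbs = single_gbs * (1 - 0.085)      # ≈ 68.2 GB/s
```

The implied queue depths (a few hundred concurrent I/Os) are well within what a GPU training cluster issuing parallel reads would generate, which is consistent with the article's framing of mixed streaming and random-access workloads.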

For storage vendors and cloud providers, xiNAS presents a low‑overhead, standards‑based alternative that can be deployed on off‑the‑shelf hardware. Its reliance on NFS eliminates the need for custom client stacks, accelerating time‑to‑value and reducing operational complexity. As AI models grow in size and training cycles shorten, solutions that combine high throughput, fault tolerance, and open protocols are likely to gain traction, positioning Xinnor as a compelling challenger in the flash‑filer market.

Read Original Article at Blocks & Files