Huawei Targets AI Data Centre Reliability with Xinghe AI Fabric 2.0

Developing Telecoms | Mar 12, 2026

Key Takeaways

  • Xinghe AI Fabric 2.0 adds AI-driven fault detection.
  • AI Eagle‑Eye monitors 200k flows in real time.
  • StarryWing Digital Map unifies multi‑vendor network management.
  • NSLB/NPLB boost GPU bandwidth utilisation to 90–98%.
  • Liquid‑cooled XH9230‑LC cuts rack cooling needs 60%.

Summary

At Mobile World Congress, Huawei unveiled Xinghe AI Fabric 2.0, a three‑layer AI‑centric architecture designed to improve the reliability and performance of AI‑heavy data centres. The suite introduces Rock‑Solid Architecture 2.0 with an AI Eagle‑Eye engine that can monitor up to 200,000 service flows and pinpoint hidden faults within minutes. Integrated components such as StarryWing Digital Map 2.0 and iMaster NCE enable unified management of multi‑vendor environments, while Xinghuan AI Turbo 2.0's load‑balancing algorithms raise GPU bandwidth utilisation to 90–98%. Huawei also launched a liquid‑cooled XH9230‑LC switch, claiming up to a 60% reduction in rack cooling demand.

Pulse Analysis

AI‑driven workloads are reshaping the economics of modern data centres, where even marginal network inefficiencies translate into millions of dollars in idle GPU spend. Huawei’s Xinghe AI Fabric 2.0 responds by embedding intelligence at every layer—from the AI Brain decision engine to the connectivity fabric—allowing real‑time analysis of hundreds of thousands of traffic flows. This architecture not only reduces mean‑time‑to‑repair for obscure faults but also creates a data‑rich substrate for predictive maintenance, a capability increasingly demanded by enterprises scaling AI services.

Operational complexity rises as organizations adopt dual‑vendor strategies to mitigate supply‑chain risk. Huawei’s StarryWing Digital Map 2.0, paired with the iMaster NCE controller, offers a single pane of glass that normalises disparate device models and integrates with existing ITSM tools. By automating firewall policy validation and providing a common API across vendors, the platform cuts manual configuration errors and accelerates rollout of new AI clusters, directly supporting the industry’s shift toward heterogeneous networking ecosystems.

Performance optimisation remains paramount, especially for GPU‑intensive training and inference. The Xinghuan AI Turbo 2.0 suite's NSLB and NPLB algorithms claim up to 98% bandwidth utilisation, dramatically improving compute‑to‑data pipelines. Coupled with the liquid‑cooled XH9230‑LC switch, which promises a 60% reduction in rack‑level cooling demand, Huawei addresses both throughput and power‑efficiency constraints. Looking ahead, the NetMaster autonomous operations platform hints at a future where AI resolves 80% of network incidents without human input, positioning Huawei as a potential leader in self‑healing data‑centre networks.
