
GreyBeards on Storage
As AI and cloud workloads demand ever‑higher bandwidth and smarter data‑plane processing, Xsight's chips offer a scalable, energy‑efficient path to modernizing data‑center networking without the complexity of heterogeneous architectures. Early adoption in high‑profile projects such as Starlink, together with open‑source benchmark results, signals a shift toward integrated SDN/DPU solutions that can accelerate time‑to‑market for next‑gen infrastructure.
In this episode, the GreyBeards host Ted Weatherford and John Carney of Xsight Labs to unpack the company's two flagship silicon products: the X2 programmable Ethernet switch and the E1 data‑processing unit (DPU). The X2, fabricated on a 5 nm process, packs 3,072 Harvard‑architecture cores and supports 128 × 100 GbE ports, positioning it for extreme‑edge AI inference and high‑throughput data‑center fabrics. Production began in late 2024, with mass production slated for 2025 and a target volume exceeding 100,000 units annually. The E1 DPU, built around a 64‑core Arm Neoverse N2 block, targets AI workloads that require both networking and compute offload, on a general‑availability roadmap that moves from sampling in mid‑2024 to full volume by mid‑2025.
Technical depth distinguishes Xsight's approach. By leveraging a Harvard architecture, the X2 sidesteps the classic von Neumann bottleneck, offering deterministic timing, reduced jitter, and a power‑per‑bit advantage uncommon in traditional pipeline switches such as Broadcom's Tomahawk and Trident families. Latency drops to roughly 450 ns—significantly faster than competing 600‑800 ns solutions—while maintaining programmable flexibility through a Python‑wrapped API and custom assembler libraries. The parallel core design also enables dynamic traffic classification, entropy‑rich hashing, and fine‑grained QoS, allowing customers to avoid hash polarization and optimize link utilization in massive AI‑focused fabrics.
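To see why "entropy‑rich hashing" matters, consider hash polarization in a multi‑tier ECMP fabric: if every switch tier applies the identical hash to the same flow key, the flows that survive tier 1's selection all collapse onto one link at tier 2. The sketch below (plain Python, not Xsight's API; the seed‑salting scheme is a generic illustration, not their documented mechanism) shows the effect and how mixing per‑switch entropy into the hash restores an even spread.

```python
import hashlib
from collections import Counter

def ecmp_pick(flow_key: bytes, num_links: int, seed: bytes = b"") -> int:
    """Pick an output link by hashing the flow key, optionally salted
    with a per-switch seed to decorrelate tiers."""
    digest = hashlib.sha256(seed + flow_key).digest()
    return int.from_bytes(digest[:8], "big") % num_links

LINKS = 4
flows = [f"10.0.0.{i}->10.1.0.{i % 7}".encode() for i in range(1000)]

# Tier 1 spreads flows across 4 links; keep only those sent out link 0.
tier1_link0 = [f for f in flows if ecmp_pick(f, LINKS) == 0]

# Polarized: tier 2 reuses the identical hash, so every surviving flow
# picks link 0 again -- three of four downstream links sit idle.
polarized = Counter(ecmp_pick(f, LINKS) for f in tier1_link0)

# Salted: tier 2 mixes in its own seed, restoring a roughly even spread.
salted = Counter(ecmp_pick(f, LINKS, seed=b"switch-2") for f in tier1_link0)

print("polarized:", dict(polarized))
print("salted:   ", dict(salted))
```

Running this shows `polarized` concentrating all traffic on a single link while `salted` distributes it across all four, which is the intuition behind per‑hop entropy in large AI fabrics.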
The market context amplifies the relevance of these innovations. With only about ten global teams capable of delivering cutting‑edge ASICs at NVIDIA‑class cadence, Xsight Labs positions itself alongside industry giants like Apple and NVIDIA, promising an “Apple‑scale” product cadence of a new chip every 14 months. Energy constraints in U.S. data centers and the scarcity of high‑performance silicon make low‑power, high‑parallelism solutions critical for scaling AI factories. By offering a fully software‑defined, energy‑efficient switch and DPU stack, Xsight aims to capture a growing segment of edge and data‑center deployments that demand both performance and flexibility, setting the stage for broader adoption and potential public offering.
Xsight Labs talks their latest SDN X2 network switch and E1 DPU chips with the GreyBeards