Startups Bring Optical Metamaterials to AI Data Centers

IEEE Spectrum AI · Mar 19, 2026

Why It Matters

The technologies could slash power consumption while multiplying bandwidth and AI processing capacity, reshaping data‑center economics and accelerating the shift to optical computing.

Key Takeaways

  • Lumotive chip steers light without moving parts
  • Scalable to 10,000×10,000 optical ports
  • Neurophos modulators 1/10,000 size of silicon photonics
  • Claims 50× compute density and energy efficiency
  • Targeting AI acceleration in data centers by 2028

Pulse Analysis

Data centers are hitting a bandwidth wall as AI workloads demand ever‑higher throughput. Conventional electronic switches incur latency and power penalties because data must repeatedly convert between electrons and photons. Optical circuit switches promise to break this bottleneck, yet existing silicon‑photonic and MEMS solutions struggle with energy efficiency or reliability. Metamaterials—engineered structures smaller than the wavelength of light—offer a new lever, enabling precise, programmable manipulation of optical signals on a chip‑scale platform.

Lumotive’s recent metasurface chip exemplifies this shift. Built with standard foundry processes, the device combines patterned copper with liquid‑crystal layers to dynamically steer, focus, and split light beams without any mechanical movement, dramatically improving reliability. The architecture scales to 10,000 × 10,000 ports, enough to replace bulky optical switch fabrics and reduce the number of transceiver modules in a rack. With a commercial launch slated for the end of 2026, hyperscalers are poised to evaluate the technology as a drop‑in upgrade for high‑speed interconnects, potentially cutting both capital and operating expenses.
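The physics behind non-mechanical beam steering is well established: a programmable phase gradient across the metasurface acts like a blazed diffraction grating, and the first-order deflection angle follows the grating equation. The sketch below illustrates that relationship; the wavelength and period values are illustrative assumptions, not Lumotive's published specifications.

```python
import math

def steering_angle_deg(wavelength_um: float, period_um: float) -> float:
    """First-order deflection angle of a phase-gradient metasurface,
    from the grating equation sin(theta) = wavelength / period."""
    return math.degrees(math.asin(wavelength_um / period_um))

# Illustrative numbers: 1.55 um telecom light, 10 um phase-ramp period.
# Reprogramming the liquid-crystal phase pattern changes the period,
# and therefore the angle, with no moving parts.
angle = steering_angle_deg(1.55, 10.0)   # about 8.9 degrees
```

Shrinking the programmable period steers the beam to steeper angles, which is how a fixed chip can address many output ports.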

Neurophos tackles a complementary challenge: the compute density gap in optical AI accelerators. Its CMOS‑compatible modulators pack a million optical transistors onto a 5 mm² die; a silicon‑photonic equivalent would span roughly a square meter. The company claims a 50‑fold improvement in both compute density and energy efficiency over Nvidia’s Blackwell GPUs, positioning the chip as a viable alternative for power‑constrained AI training and inference. Proof‑of‑concept units are slated for evaluation this year, with volume production expected by mid‑2028. If the performance claims hold, optical metamaterials could become a cornerstone of next‑generation, low‑power AI infrastructure, accelerating the industry’s transition from electronic to photonic computing.
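Neurophos has not published its architecture, but photonic AI accelerators in general map arrays of modulators onto the multiply‑accumulate operations at the heart of neural networks: each modulator scales one optical path by a weight, and photodetectors sum the resulting light. The toy model below sketches that mapping under those generic assumptions; it is not a description of the Neurophos design.

```python
import numpy as np

# Toy model of a photonic tensor core: each modulator attenuates one
# optical path by a transmission weight in [0, 1], and a photodetector
# sums the light on each output row -- together they compute y = W @ x.
rng = np.random.default_rng(0)
W = rng.uniform(0.0, 1.0, size=(4, 4))   # modulator transmission weights
x = rng.uniform(0.0, 1.0, size=4)        # input light intensities
y = W @ x                                # photodetector row readout
```

The appeal is that the multiply and the sum happen in the optical domain at the speed of light; the modulator footprint then sets how many of these operations fit on a die, which is why a 10,000‑fold size reduction translates directly into compute density.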

