
The optical T100 could break silicon's performance ceiling, delivering unprecedented AI compute at a fraction of the energy cost and potentially reshaping the GPU market.
Optical computing has moved from laboratory prototypes to commercial ambition, promising to overcome the physical constraints that have slowed silicon scaling for over a decade. As transistor dimensions approach atomic limits, the cadence of Moore's Law is faltering, prompting investors and technologists to explore photons as carriers of information. Bill Gates' venture backing of Neurophos signals a rare convergence of deep‑pocket capital and visionary research, lending credibility to a sector that has long struggled to achieve mass‑market relevance.
The Tulkas T100, Neurophos' flagship optical GPU, advertises a peak performance of 470 petaFLOPS, far beyond today's leading silicon GPUs. Built on optical transistors, the chip routes light instead of electrons, dramatically reducing latency and power draw. Integrated high‑capacity RAM and an SSD module enable on‑chip data storage, eliminating bottlenecks that plague conventional GPU pipelines during large‑scale AI model training. Early benchmarks suggest the processor can accelerate transformer workloads at a fraction of the energy footprint of comparable Nvidia or AMD solutions.
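To put the headline figure in perspective, here is a back-of-envelope sketch of how peak throughput translates into training time for a compute-bound workload. The 470 petaFLOPS figure comes from the article; the silicon baseline, total training FLOPs, and utilization factor are illustrative assumptions, not published benchmarks.

```python
# Back-of-envelope comparison of compute-bound training time, assuming
# throughput scales linearly with peak FLOPS. Baseline and workload
# figures below are assumptions for illustration only.

T100_PEAK_PFLOPS = 470.0      # advertised T100 peak (from the article)
SILICON_PEAK_PFLOPS = 4.0     # assumed peak for a leading silicon GPU
TRAIN_FLOPS = 1e24            # assumed total FLOPs for a large model run

def train_days(peak_pflops: float, utilization: float = 0.4) -> float:
    """Days to complete TRAIN_FLOPS at the given peak and utilization."""
    flops_per_sec = peak_pflops * 1e15 * utilization
    return TRAIN_FLOPS / flops_per_sec / 86_400  # seconds per day

silicon = train_days(SILICON_PEAK_PFLOPS)
optical = train_days(T100_PEAK_PFLOPS)
print(f"silicon: {silicon:.0f} days, optical: {optical:.1f} days, "
      f"speedup: {silicon / optical:.1f}x")
```

Under these assumptions the speedup is simply the ratio of peak throughputs; real gains would depend heavily on memory bandwidth, software maturity, and sustained (not peak) utilization.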
If the T100 scales to production, data‑center operators could rethink server architecture, consolidating compute, memory, and storage into a single optical module. Such a shift would lower total cost of ownership, accelerate AI research cycles, and open new markets for hyperscale cloud providers. However, challenges remain: manufacturing optical transistors at volume, integrating with existing software stacks, and proving reliability under continuous load. Success would not only upend the GPU landscape but also validate optical computing as a viable successor to silicon, reshaping the broader semiconductor ecosystem.