

Neurophos’s technology promises a breakthrough in AI inference efficiency, potentially lowering data‑center power costs and challenging Nvidia’s dominance in high‑performance compute.
Photonics has long been touted as the next frontier for high‑performance computing because light can move faster and generate far less heat than electrons. Traditional optical chips, however, suffer from bulky components and costly manufacturing, limiting their adoption in dense data‑center environments. Neurophos tackles these hurdles with a metasurface modulator that shrinks optical transistors by four orders of magnitude, enabling thousands of units to be integrated on a silicon‑compatible die. This approach blends the speed of light with the scalability of existing foundry processes, positioning the company to bridge the gap between laboratory prototypes and commercial silicon photonics.
The startup’s performance claims are striking: a 56 GHz optical processing unit delivering 235 peta‑operations per second while consuming just 675 watts, a ten‑fold efficiency gain over Nvidia’s B200 GPU. Backed by a $110 million funding round that includes Bill Gates’ venture firm and Microsoft’s M12, Neurophos has secured both capital and strategic interest from AI infrastructure leaders. By targeting inference workloads—where power consumption dominates operational costs—the company aims to offer data‑center operators a compelling alternative to silicon‑based GPUs and TPUs, especially as model sizes continue to balloon.
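The headline figures above can be sanity‑checked with simple arithmetic, using only the numbers quoted in this article (the per‑operation energy and ops‑per‑watt values below are derived, not claimed by Neurophos):

```python
# Back-of-envelope check of the quoted figures: 235 peta-ops/s at 675 W.
ops_per_sec = 235e15      # 235 peta-operations per second (article figure)
power_watts = 675         # claimed power draw (article figure)

efficiency = ops_per_sec / power_watts     # operations per second, per watt
joules_per_op = power_watts / ops_per_sec  # energy spent per operation

print(f"Efficiency: {efficiency:.2e} ops/s per watt")
print(f"Energy per op: {joules_per_op * 1e15:.1f} femtojoules")
```

This works out to roughly 3.5 × 10¹⁴ operations per second per watt, or on the order of a few femtojoules per operation; comparing that against a GPU’s own ops‑per‑watt figure is what underlies the ten‑fold efficiency claim.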
If Neurophos can translate its laboratory results into mass‑produced chips by 2028, the impact on AI economics could be profound. Lower power draw per inference translates directly into reduced electricity bills and cooling requirements, reshaping the cost structure of hyperscale cloud providers. Moreover, the ability to fabricate the OPUs using standard silicon foundry tools mitigates the supply‑chain risks that have plagued other photonic ventures. While Nvidia remains the market leader, a successful rollout of Neurophos’s optical processors would introduce a disruptive, energy‑efficient layer to the AI compute stack, potentially spurring a new wave of hardware innovation across the industry.