AI

Scientists Say They've Eliminated a Major AI Bottleneck — Now They Can Process Calculations 'at the Speed of Light'

Live Science AI • November 24, 2025

Companies Mentioned

  • Google (GOOG)
  • OpenAI
  • Anthropic
  • xAI
  • Meta (META)

Why It Matters

By removing the speed and energy constraints of current AI hardware, POMMM could enable larger, more capable models at lower operational cost, giving early adopters a decisive competitive edge in the AI race.

Key Takeaways

  • POMMM processes multiple tensors with one laser pulse
  • Optical computing now rivals GPU parallelism
  • Prototype reduces energy use by eliminating active switching
  • Integration onto photonic chips expected in 3‑5 years
  • Could enable faster, greener training of massive AI models

Pulse Analysis

Optical computing has long promised speed and efficiency, but its inability to run operations in parallel kept it from displacing GPUs in AI workloads. Traditional photonic designs require sequential laser scans, creating a hard ceiling on tensor‑processing throughput. The POMMM architecture flips this paradigm by encoding data into the amplitude and phase of light, allowing a single burst to perform matrix‑matrix multiplications across many tensors simultaneously. This passive propagation eliminates the need for active switching, dramatically cutting power draw while delivering near‑light‑speed computation.
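
The parallelism claim is easiest to picture in software terms. The sketch below is a minimal NumPy analogy, not the researchers' actual optical implementation: each matrix entry is encoded as a complex number whose magnitude stands in for the light's amplitude and whose angle stands in for its phase, and a single batched multiplication stands in for the one-pulse optical pass. The `encode` helper and the sign-as-phase scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(real_matrices: np.ndarray) -> np.ndarray:
    """Encode real matrices into a complex amplitude/phase representation.

    Illustrative stand-in for optical encoding (an assumption, not the
    paper's scheme): the amplitude carries each entry's magnitude and the
    phase (0 or pi) carries its sign.
    """
    amplitude = np.abs(real_matrices)
    phase = np.where(real_matrices < 0.0, np.pi, 0.0)
    return amplitude * np.exp(1j * phase)

# A batch of 8 independent matrix pairs, all handled in one step.
A = rng.standard_normal((8, 64, 64))
B = rng.standard_normal((8, 64, 64))

# One batched einsum stands in for the single optical pass: every pair
# (A[k], B[k]) is multiplied simultaneously, the software analogue of
# performing all the tensor products with one light burst instead of
# scanning them sequentially.
C = np.einsum('kij,kjl->kil', encode(A), encode(B))

# Reading out the real part recovers the ordinary matrix products.
assert np.allclose(C.real, A @ B)
```

The point of the analogy is the shape of the computation, not its physics: where sequential photonic designs would loop over the batch dimension `k` one scan at a time, POMMM's claimed advance is collapsing that loop into a single passive propagation.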

The performance gains translate into tangible business advantages. Data‑center operators could shrink clusters that currently rely on thousands of GPUs, reducing capital expenditure and electricity bills. Moreover, the lower thermal footprint eases cooling requirements, opening the door for dense, edge‑deployed AI accelerators. Industry analysts estimate that integrating POMMM onto silicon‑photonic chips within the next three to five years could accelerate model training cycles by an order of magnitude, making rapid experimentation more affordable for startups and research labs alike.

Beyond cost savings, the technology reshapes strategic roadmaps for AI development. Faster tensor processing removes a key barrier to scaling models toward artificial general intelligence, a goal championed by some leading labs. While skeptics argue that scaling alone won’t achieve AGI, the ability to train ever‑larger networks with minimal energy could spur new algorithmic breakthroughs. As major cloud providers and chip manufacturers evaluate photonic solutions, POMMM positions optical computing as a viable, future‑proof alternative to electronic accelerators, potentially redefining the competitive landscape of AI hardware.

Read Original Article: Scientists say they've eliminated a major AI bottleneck — now they can process calculations 'at the speed of light'