AI Pulse
AWS Networking Boss Talks Roadmap, Hollow Core Fiber, and Data Center Future

Big Data • AI

Data Center Knowledge • February 9, 2026
Companies Mentioned

  • Amazon (AMZN)
  • Anthropic
  • NVIDIA (NVDA)
  • Nasdaq (NDAQ)
Why It Matters

By reshaping its network architecture, AWS ensures latency‑sensitive AI and financial workloads can scale without bottlenecks, reinforcing its market dominance in cloud infrastructure.

Key Takeaways

  • AWS plans $200 billion 2026 capex, heavy networking focus
  • Hollow‑core fiber deployed in 5‑10 sites for latency
  • New control plane enables sub‑second failure recovery
  • In‑house ASICs provide uniform hardware, faster software updates
  • AWS builds nanosecond‑accurate time service for finance workloads

Pulse Analysis

AWS’s $200 billion 2026 capex underscores a strategic shift toward networking excellence, recognizing that AI workloads demand unprecedented bandwidth and ultra‑low latency. The investment fuels not only traditional fiber upgrades but also experimental hollow‑core fiber, a photonic technology that reduces signal loss and expands the feasible distance between data‑center clusters. By deploying this fiber in a select few metro sites, AWS can keep inter‑zone latency under half a millisecond, a critical threshold for large language model training and inference.

A cornerstone of AWS’s roadmap is its next‑generation control plane, engineered to manage the exponential growth in device count and optical links. This software‑defined layer delivers sub‑second failure recovery and consistent configuration across hundreds of thousands of links, eliminating the scaling limits of legacy control mechanisms. Coupled with vertically integrated ASICs and custom networking hardware, AWS can push firmware updates fleet‑wide, streamline provisioning, and extract efficiency gains at scale, reinforcing its ability to offer seamless, high‑performance connectivity for multi‑cloud customers.

Beyond raw performance, AWS is building a high‑precision timing service that delivers nanosecond accuracy, unlocking use cases such as distributed databases and financial exchanges previously constrained by clock drift. Energy efficiency remains a priority; by optimizing watts‑per‑bit through custom silicon and dynamic power scaling, AWS reduces operational costs and environmental impact. Looking ahead, the company anticipates liquid‑cooled networking gear and co‑packaged optics becoming mainstream, further tightening the integration between compute, storage, and network layers as AI continues to drive the cloud market forward.

AWS Networking Boss Talks Roadmap, Hollow Core Fiber, and Data Center Future

AWS didn’t become the leading global cloud provider by playing it safe.

The company is doubling down on its AI infrastructure efforts, with a $200 billion capital expenditure plan for 2026. Much of that investment will be funneled into its networking services portfolio.

AWS isn’t just dropping $200 billion for the sake of it – it’s rewriting the physics of its network to rein in latency and stave off potential bottlenecks. With emerging technologies like hollow‑core fiber, continued emphasis on its in‑house hardware, and a redesigned control plane, the company is aiming to set the standard for multi‑cloud well into the future.

AWS has built a layered networking ecosystem and is ramping up data‑center power capacity. In its Q3 2025 earnings call, Amazon CEO Andy Jassy said AWS added 3.8 GW of data‑center capacity in 2025 alone.

And it has good reason to focus all that energy on AI infrastructure.

Related: Project Rainier: AWS, Anthropic Complete Massive AI Supercomputing Cluster

The company’s networking services portfolio has seen major enterprise demand, commanding strong year‑over‑year growth. According to Polaris Market Research, the global multi‑cloud networking market is anticipated to grow to $36.5 billion by 2034, reflecting a shift in enterprise IT architecture to meet the needs of the AI arms race.

Matt Rehder, vice president of AWS core networking, sat for a wide‑ranging interview with Data Center Knowledge. He noted that the company is taking bold steps, including firing up hollow‑core fiber – an emerging challenger to traditional fiber optics – to expand its networking arsenal for metro areas.

Matt Rehder, AWS vice president for core networking

The following is a partial transcript of the interview with Rehder about the future of AWS, edited for brevity and clarity:

DCK: AWS has outlined major CapEx plans for 2026, with networking set to benefit heavily. How do emerging technologies like hollow‑core fiber fit into spending at that scale, and what’s the end goal?

Rehder: What we’re seeing – driven by both generative AI and traditional cloud workloads – is accelerated customer growth across the board, and that translates directly into demand for more bandwidth.

That demand shows up in two ways. First, every server we deploy needs to connect to the network, and the bandwidth per server continues to rise over time. Second, all of our data centers must interconnect – within availability zones, across zones, between regions, and externally. That sustained bandwidth growth is something we’ve seen for years, but AI has clearly accelerated it.

Our priorities are availability, reliability, and resiliency. If the network doesn’t work, nothing else matters. The core objective is scale without constraint. We never want networking to get in the way of the business. That means having enough ports, enough bandwidth, and enough elasticity so customers don’t have to think about the network at all.

Related: AWS Launches Trainium3 Chip to Challenge Nvidia’s AI Dominance

DCK: Hollow‑core fiber was long considered impractical due to concerns about cost and supply. What changed, and where is AWS actually deploying it today?

Rehder: Hollow‑core fiber has been talked about for most of my 25‑year career, usually as a theoretical idea. We always knew it was physically possible, but it wasn’t manufacturable at scale.

That started to change four or five years ago as academic research improved production techniques. Even now, it’s still a nascent technology. The two hard problems are manufacturability – can you produce long, reliable spans of fiber? – and cost.

The primary use case for us is long‑distance interconnect. AWS availability zones are composed of multiple data centers that customers treat as one logical facility. To make that work, we need latency under roughly half a millisecond. That constraint limits how far apart facilities can be.

Hollow‑core fiber lets us widen that radius. It gives us more flexibility when land or power isn’t available close enough together. Today, it’s significantly more expensive than traditional fiber, but if it enables expansion where we otherwise couldn’t build, it can still be the right trade‑off.

Related: Forget Quantum? Why Photonic Data Centers Could Arrive First

We’re using it in a very small number of locations—on the order of five to ten—specifically where geographic constraints exist. Longer term, if costs come down, I expect hollow core to become much more common. Beyond latency, it has lower signal loss, which can support higher bandwidth or reduce amplification needs.
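For a rough sense of why hollow core widens that radius, here is a back‑of‑the‑envelope sketch (illustrative physics, not AWS deployment figures): light crawls through solid silica at roughly c/1.47, while hollow‑core fiber guides it through air at nearly the vacuum speed of light, so the same latency budget buys meaningfully more distance.

```python
# Back-of-the-envelope: fiber reach under a fixed latency budget.
# Illustrative physics, not AWS deployment figures.

C_KM_PER_MS = 299.792          # speed of light in vacuum, km per ms

GROUP_INDEX = {                # light travels at c / n in the medium
    "solid-core silica": 1.47, # typical effective index, standard fiber
    "hollow-core":       1.003 # air-guided: within ~1% of vacuum speed
}

BUDGET_MS = 0.5                # the latency ceiling Rehder cites
                               # (treated here as one-way propagation only)

for fiber, n in GROUP_INDEX.items():
    reach_km = BUDGET_MS * C_KM_PER_MS / n
    print(f"{fiber:>18}: ~{reach_km:.0f} km of fiber within {BUDGET_MS} ms")

# solid-core ~102 km vs. hollow-core ~149 km: roughly 45% more reach
# before routing slack and equipment delay eat into the budget.
```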

DCK: Inside the data center, AI workloads have changed the game. What new networking bottlenecks are you seeing at scale?

Rehder: Two stand out.

First is control‑plane scalability. ML servers require two to three times more bandwidth per server than traditional CPU‑based systems. As we scale networks to meet that demand, the number of devices and optical links grows dramatically.

At that point, traditional control‑plane approaches stop working well. Recovery times increase, convergence slows, and you hit algorithmic limits. Around 2020, we built a new control plane specifically designed for ML networks. It enables sub‑second recovery from failures, consistent programming across thousands of devices, and scalability to hundreds of thousands of links without hitting cliffs.

That system is now becoming the foundation for all of our networks, not just ML, because it’s fundamentally better.
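Rehder doesn't detail the design, but one common way to achieve sub‑second recovery at this scale is to precompute backup next‑hops so that a link failure triggers a local table swap rather than a network‑wide routing re‑convergence. A toy sketch of that general technique (hypothetical names; not AWS's actual control plane):

```python
# Toy sketch of fast failure recovery via precomputed backup next-hops.
# Illustrates the general technique only -- not AWS's actual control plane.

from dataclasses import dataclass, field

@dataclass
class ForwardingEntry:
    primary: str        # preferred next-hop device
    backup: str         # alternate next-hop, installed ahead of time
    active: str = ""    # next-hop currently used for forwarding

    def __post_init__(self):
        self.active = self.primary

@dataclass
class Switch:
    name: str
    fib: dict = field(default_factory=dict)  # prefix -> ForwardingEntry

    def install_route(self, prefix, primary, backup):
        """Control plane pushes primary *and* backup before any failure."""
        self.fib[prefix] = ForwardingEntry(primary, backup)

    def on_link_down(self, neighbor):
        """Recovery is a local table walk -- no shortest-path
        recomputation across hundreds of thousands of links."""
        for entry in self.fib.values():
            if entry.active == neighbor:
                entry.active = entry.backup

sw = Switch("tor-1")
sw.install_route("10.0.0.0/24", primary="spine-a", backup="spine-b")
sw.on_link_down("spine-a")
assert sw.fib["10.0.0.0/24"].active == "spine-b"
```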

The second challenge is cabling. At hyperscale, you can have hundreds of thousands of physical links in a single data center. That creates issues around weight, routing, deployment speed, and long‑term maintenance.

We’ve invested in better tracking systems, improved cable designs, and new connector technologies that aggregate many fibers into a single connection. That reduces deployment time and improves reliability at scale.

DCK: AWS designs much of its own networking hardware. What advantages does that vertical integration provide?

Rehder: We started developing our own networking hardware about 15 years ago, initially just for server connectivity. Today, nearly our entire network – from top‑of‑rack switches to backbone and internet edge – runs on our own devices.

The biggest advantage is consistency. We use the same fundamental building block everywhere: the same ASIC, form factor, and operating system. That simplifies supply chains and lets us apply software improvements across the entire network at once.

It also enables capabilities we couldn’t build otherwise. Our control plane, for example, runs partly on the devices themselves. That wouldn’t be possible with off‑the‑shelf gear.

Operationally, it improves provisioning, monitoring, and repair. We can automate testing, pull exactly the telemetry we want, and trigger remediation automatically. Every incremental improvement scales across the whole network.

DCK: AWS has also built a high‑precision time service. Why was that necessary, and what does it unlock?

Rehder: Around 2019, we started focusing on time precision. Standard approaches like NTP can be off by seconds, which creates real problems in large distributed systems, especially for consistency and ordering.

Software‑only solutions can’t overcome network variability, so we built a hardware‑based time network that runs alongside our data network. Each data center has an atomic clock synchronized via GPS. Specialized devices distribute a timing pulse, and hardware on every server – using our Nitro platform – receives that pulse with nanosecond‑level accuracy.

That enables microsecond‑level precision in software. It unlocks new capabilities like highly consistent distributed databases and makes workloads such as financial exchanges viable in the cloud. Nasdaq has already demonstrated how an exchange could run on top of this architecture. That simply wasn’t possible a decade ago.
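The interview doesn't spell out how a tighter clock bound translates into consistency, but a standard mechanism (popularized by Google's TrueTime, and assumed here for illustration) exposes time as an interval bounded by the clock's worst‑case error and has a transaction wait out that uncertainty before committing. The tighter the bound, the shorter the stall:

```python
# Sketch: interval timestamps plus "commit wait" -- the standard way a
# bounded-error clock yields a global event ordering. ClockWithBound and
# its error figures are invented for illustration, not an AWS API.

import time

class ClockWithBound:
    """A clock that knows its own worst-case error."""
    def __init__(self, max_error_s):
        self.max_error_s = max_error_s

    def now_interval(self):
        """True time is guaranteed to lie inside [earliest, latest]."""
        t = time.time()
        return t - self.max_error_s, t + self.max_error_s

def commit(clock):
    """Pick a timestamp, then wait until every correct clock has surely
    passed it; any later commit anywhere then gets a larger timestamp."""
    _, latest = clock.now_interval()
    while clock.now_interval()[0] < latest:
        pass  # busy-wait out the uncertainty window (2 x max error)
    return latest

for bound in (5e-3, 5e-6):  # ~NTP-grade vs. hardware-assisted bounds
    clock = ClockWithBound(bound)
    start = time.time()
    commit(clock)
    print(f"error bound {bound}s -> commit stalled {time.time() - start:.6f}s")
```

With millisecond‑grade bounds every commit stalls for milliseconds; with microsecond bounds the wait all but vanishes, which is what makes latency‑sensitive workloads like exchanges practical.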

DCK: With power and cooling constraints intensifying, how much do energy limits shape your networking roadmap?

Rehder: Energy doesn’t limit our roadmap, but efficiency is a major focus. We look closely at watts per bit – how much power it takes to move data.

Because we control our hardware, we can optimize at very fine levels: fan algorithms, component choices, and dynamic power scaling based on load. The gains per device may be small, but across thousands of switches and many data centers, they add up to meaningful reductions in total power use.

That benefits the environment, customers, and our cost structure.
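To make the watts‑per‑bit arithmetic concrete (all numbers below are invented for illustration, not AWS figures):

```python
# Watts-per-bit, with invented but plausible numbers (not AWS figures).

switch_power_w  = 2_000      # one switch's draw, watts (assumed)
switch_tput_bps = 25.6e12    # 25.6 Tb/s forwarding capacity (assumed)

joules_per_bit = switch_power_w / switch_tput_bps   # W per bit/s == J per bit
print(f"~{joules_per_bit * 1e12:.0f} pJ to move one bit")  # ~78 pJ

# A 3% per-device saving is invisible alone, material at fleet scale:
fleet_size = 100_000        # switches fleet-wide (assumed)
saved_w = fleet_size * switch_power_w * 0.03
print(f"fleet-wide saving: ~{saved_w / 1e6:.0f} MW")       # ~6 MW
```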

DCK: Looking three to five years out, what networking assumptions common today will be obsolete by the end of the decade?

Rehder: Two major shifts stand out.

First, liquid cooling will become standard for network devices, not just servers. Mixing liquid‑cooled servers with air‑cooled networking adds complexity, and liquid offers efficiency advantages.

Second, optics integration will change. Today’s pluggable optics provide flexibility and serviceability, which is valuable at scale. Fully co‑packaged optics have long been discussed but struggled with reliability and operational trade‑offs.

I think the industry will move toward co‑packaged connectors instead – integrating connectors closer to the ASIC while keeping optical engines modular. That delivers efficiency gains without sacrificing supplier diversity, which is critical at AWS scale.

DCK: Finally, what should AWS customers expect on the networking side in 2026?

Rehder: Ideally, more invisibility. More capacity, more bandwidth, lower latency, less packet loss, and less jitter.

Customers should see continued expansion in capacity, better performance, and tighter integration with compute, storage, and accelerated instances. Our goal is simple: make sure the network never gets in the way of what customers want to build.
