AI workloads are reshaping network demand, and telcos that monetize AI‑specific services will capture a fast‑growing revenue stream while avoiding marginalization by hyperscalers.
The panel titled “The AI‑native telco: Capturing revenue opportunities in the AI value chain” examined how telecom operators are being forced to reinvent themselves as AI‑centric infrastructure providers. Speakers highlighted that AI training and inference workloads are turning traditionally north‑south traffic into massive east‑west flows, demanding symmetric capacity, low latency, and deterministic performance across data‑center interconnects and edge sites.
Key insights included the emergence of AI‑specific service‑level agreements (SLAs) that can be sold as premium products, the need for substantial investment in high‑capacity data‑center interconnect (DCI) and edge compute, and the strategic importance of "stickiness": bundling connectivity with AI platform services, data management, and risk‑scoring applications. Analysts warned that without moving beyond the classic "big pipe" model, telcos risk being disintermediated by hyperscalers.
Notable examples cited were Orange's rollout of AI platforms for enterprises, Verizon's RAN controller for energy optimization, AT&T's network digital twin for proactive fault detection, and HPE's vision of offering GPU‑as‑a‑service. Adora emphasized turning SLA guarantees into marketable products, while Francis stressed that past inflection points failed when investment lagged behind use‑case demand.
The implication for operators is clear: to capture AI‑related revenue, they must invest in symmetric, low‑latency infrastructure, develop AI‑focused service portfolios, and embed themselves in the broader AI value chain rather than remaining pure connectivity providers. Those that succeed will secure new, recurring revenue streams and become indispensable partners for enterprises deploying AI at scale.