
MWC 2026: SK Telecom and Panmnesia Sign Partnership to Innovate AI Data Center Architecture, Enhancing Cost Efficiency and Performance
Key Takeaways
- CXL disaggregates CPUs, GPUs, and memory across the rack.
- Reduces GPU idle time and cuts capital expenses.
- Eliminates Ethernet overhead, boosting AI training throughput.
- SK Telecom provides deployment expertise; Panmnesia supplies link solutions.
- Proof‑of‑concept slated for year‑end, targeting commercialization.
Summary
South Korea’s SK Telecom and AI‑infrastructure specialist Panmnesia have signed an MOU at MWC 2026 to co‑develop a Compute Express Link (CXL)‑based AI rack that disaggregates CPUs, GPUs and memory at the rack level. The partnership targets the high cost and low utilization of traditional fixed‑ratio server designs by replacing Ethernet interconnects with a CXL fabric switch and link controllers. By dynamically allocating resources, the architecture promises higher GPU utilization, lower latency and reduced capital and operational expenditures. Validation with real AI workloads is planned for the end of 2026, followed by proof‑of‑concept deployments.
Pulse Analysis
The rapid expansion of generative AI models has exposed fundamental inefficiencies in conventional data‑center designs, where CPUs, GPUs and memory are locked into fixed‑ratio servers. These monolithic configurations force operators to over‑provision hardware, driving up both capital and power costs while leaving valuable compute cycles idle. Compute Express Link (CXL), a high‑speed, low‑latency interconnect, emerged as a solution that can break these silos, enabling resources to be pooled and allocated on demand across an entire rack.
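The utilization gap described above can be illustrated with a toy scheduler. Everything in this sketch is hypothetical (the job sizes, server shapes, and resource units are invented for illustration, not taken from SK Telecom or Panmnesia); it only shows why a rack-wide pool strands less capacity than fixed-ratio servers.

```python
"""Toy comparison: fixed-ratio servers vs. a disaggregated rack-level pool.
All numbers are hypothetical and chosen only to illustrate stranded capacity."""

def fixed_ratio_servers(jobs, n_servers, gpus_per_server, mem_per_server):
    """Each job must fit entirely inside one server; leftover capacity
    on a partly used server is stranded and unusable by larger jobs."""
    servers = [{"gpu": gpus_per_server, "mem": mem_per_server}
               for _ in range(n_servers)]
    placed = 0
    for gpu_need, mem_need in jobs:
        for s in servers:
            if s["gpu"] >= gpu_need and s["mem"] >= mem_need:
                s["gpu"] -= gpu_need
                s["mem"] -= mem_need
                placed += 1
                break
    return placed

def pooled_rack(jobs, total_gpus, total_mem):
    """Disaggregated pool: any job can draw GPUs and memory from the
    rack-wide totals, so mismatched job shapes no longer strand capacity."""
    placed = 0
    for gpu_need, mem_need in jobs:
        if total_gpus >= gpu_need and total_mem >= mem_need:
            total_gpus -= gpu_need
            total_mem -= mem_need
            placed += 1
    return placed

# Hypothetical mixed workload: GPU-heavy jobs (2 GPUs, 1 mem unit)
# interleaved with memory-heavy jobs (1 GPU, 5 mem units).
jobs = [(2, 1), (1, 5), (2, 1), (1, 5)]

# Four fixed-ratio servers with 2 GPUs and 4 mem units each
# (8 GPUs and 16 mem units in total, same as the pool below).
print(fixed_ratio_servers(jobs, 4, 2, 4))  # 2 — memory-heavy jobs don't fit
print(pooled_rack(jobs, 8, 16))            # 4 — same hardware, all jobs placed
```

With identical total hardware, the fixed-ratio layout rejects every memory-heavy job because no single server holds enough memory, while the pooled rack places the whole workload; this is the utilization effect CXL-style disaggregation targets.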
At MWC 2026, SK Telecom, bringing its massive AI‑service footprint and operational expertise, partnered with Panmnesia, a specialist in link‑level semiconductor solutions. Together they are building a CXL‑based AI rack that replaces traditional Ethernet fabrics with a dedicated CXL fabric switch and integrated link controllers. This architecture eliminates data copies and software‑mediated routing, allowing GPUs to communicate directly with memory and with each other, which translates into higher throughput, lower latency, and markedly better GPU utilization. Early simulations suggest potential reductions of up to 30% in total cost of ownership for large‑scale AI workloads.
If the joint validation succeeds, the solution could reshape the economics of AI infrastructure worldwide. Enterprises and cloud providers will gain a pathway to scale AI services without the exponential rise in hardware spend, while the disaggregated model aligns with broader industry moves toward composable compute. The planned proof‑of‑concept deployments later this year position SK Telecom and Panmnesia to capture early market share and influence emerging standards, signaling a shift toward more modular, performance‑centric data‑center ecosystems.