MCCSys-5: 5th Workshop on Memory-Centric Computing Systems, Held with ASPLOS 2026 - 23 March 2026
Why It Matters
Addressing the memory bottleneck is essential for scaling AI and data‑intensive applications while curbing energy costs, making memory‑centric designs a strategic priority for the computing industry.
Key Takeaways
- Memory bottleneck dominates performance, energy, and cost in modern systems.
- Processing-in-memory (PIM) can reduce data movement overhead dramatically.
- Current architectures waste >90% energy on memory traffic for AI workloads.
- Workshop highlights research on autonomous memory management and security attacks.
- Upcoming MCCSys-7 will accept papers for inclusion in ASPLOS proceedings.
Summary
The fifth Memory‑Centric Computing Systems (MCCSys‑5) workshop, co‑located with ASPLOS 2026, gathered researchers to confront the growing memory bottleneck that now dominates performance, energy consumption, and hardware cost across data‑intensive workloads. Organizers outlined the agenda—keynotes on memory‑centric architectures, recent advances in processing‑in‑memory (PIM) using DRAM chips, and sessions on side‑channel and fault‑injection attacks—while inviting submissions for the upcoming MCCSys‑7 proceedings.
Presentations underscored that moving data between storage, memory, caches, and registers now accounts for the majority of system energy and latency. Empirical studies cited from Google's data-center and mobile workloads revealed that 60-90% of total energy is spent on memory traffic, a figure that rises above 90% for edge neural networks and is expected to worsen with large language models. Speakers also stressed that a single memory access can consume hundreds to thousands of times more energy than a simple compute operation, illustrating why traditional processor-centric designs are increasingly unsustainable.
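The energy disparity above can be made concrete with a back-of-envelope calculation. This is an illustrative sketch, not a figure from the workshop: the per-operation energies below are assumed ballpark values consistent with the "hundreds to thousands of times" gap the speakers cited.

```python
# Illustrative sketch: why memory traffic dominates system energy.
# The per-operation energies are ASSUMED ballpark figures, chosen only
# to reflect the ~100-1000x gap between a DRAM access and a compute op.
E_COMPUTE_PJ = 1.0    # assumed energy of one simple compute op (picojoules)
E_DRAM_PJ = 640.0     # assumed energy of one off-chip DRAM access (picojoules)

def memory_energy_fraction(ops: int, dram_accesses: int) -> float:
    """Fraction of total energy spent moving data to/from DRAM."""
    mem_energy = dram_accesses * E_DRAM_PJ
    total_energy = ops * E_COMPUTE_PJ + mem_energy
    return mem_energy / total_energy

# Even if only 1 in 10 operations touches DRAM, memory dominates:
frac = memory_energy_fraction(ops=10_000, dram_accesses=1_000)
print(f"{frac:.0%}")  # -> 98%
```

Under these assumptions, memory traffic absorbs roughly 98% of total energy even at a modest access rate, which is the arithmetic behind the argument for moving computation closer to the data.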
Speakers used vivid analogies—a “10‑year‑old kid” test—to illustrate that despite dedicating 90‑95% of hardware real estate to data movement, memory remains the system’s choke point. Professor Phil Gibbons and others presented concrete PIM prototypes that autonomously manage memory, reducing data movement and mitigating security vulnerabilities such as side‑channel attacks. The workshop also featured a candid discussion on venue accessibility, emphasizing the need to lower barriers for under‑21 participants.
The implications are clear: future high‑performance and energy‑efficient computing will require a paradigm shift toward memory‑centric designs, distributed computation across memory and other near‑data components, and robust security models. Researchers are urged to contribute novel PIM architectures and security analyses to the forthcoming MCCSys‑7, which will publish accepted work in the ASPLOS proceedings, shaping the next generation of data‑centric systems.