Memory-Centric Computing - Talk at IEEE Custom Integrated Circuits Conference - Prof. Onur Mutlu

Onur Mutlu Lectures
Feb 24, 2026

Why It Matters

Memory‑centric computing tackles the dominant energy and latency bottleneck of data movement, enabling scalable AI and genomics workloads while dramatically reducing operational costs.

Key Takeaways

  • Data movement dominates energy use in modern systems
  • Genomics and AI workloads outpace traditional processing capabilities
  • Near‑memory compute can reduce latency by more than 10×
  • Specialized memory‑centric architectures yield two orders of magnitude efficiency gains
  • Intelligent controllers can adapt using runtime data semantics

Summary

Prof. Onur Mutlu opened the IEEE Custom Integrated Circuits talk by framing memory‑centric computing as a response to exploding data volumes in AI, genomics, and other data‑intensive workloads. He highlighted that while CPUs, GPUs, and accelerators have grown more powerful, the cost of moving data between storage, memory, and compute now eclipses actual processing, with studies showing 60‑90% of system energy consumed by data movement.
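The scale of that imbalance can be seen with a back-of-envelope model. The per-operation energies below are rough, order-of-magnitude figures commonly cited in the architecture literature, not numbers from the talk; the kernel shape (one flop per 8-byte operand fetched from DRAM) is likewise an illustrative assumption.

```python
# Back-of-envelope model of where energy goes in a memory-bound kernel.
# Energy constants are rough order-of-magnitude estimates, NOT figures
# from the talk: ~1 pJ per floating-point op vs. tens of pJ per byte
# moved over an off-chip DRAM interface.
PJ_PER_FLOP = 1.0        # energy of one floating-point operation (pJ)
PJ_PER_DRAM_BYTE = 20.0  # energy to move one byte to/from DRAM (pJ)

def energy_breakdown(flops, dram_bytes):
    """Return (compute_pJ, movement_pJ, movement_share_of_total)."""
    compute = flops * PJ_PER_FLOP
    movement = dram_bytes * PJ_PER_DRAM_BYTE
    return compute, movement, movement / (compute + movement)

# A streaming kernel: 1 flop per 8-byte operand read from DRAM.
compute, movement, share = energy_breakdown(flops=1e9, dram_bytes=8e9)
print(f"data movement share of energy: {share:.0%}")  # prints "99%"
```

Even with generous assumptions about compute cost, a streaming workload like this spends essentially all of its energy moving bytes rather than operating on them, which is the regime the 60-90% system-level measurements point to.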

The lecture surveyed concrete examples, from nanopore genome sequencers that generate terabytes of raw reads to large language models that demand ever‑larger memory footprints. Mutlu described prototype systems that attach reconfigurable logic to high‑bandwidth memory (HBM) – such as IBM‑partnered FPGA boards – achieving >10× performance and >100× energy‑efficiency improvements for genome analysis and weather modeling. He also noted emerging techniques like in‑SSD filtering and near‑memory analytics that further shrink data transfer overhead.
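The traffic savings of in-SSD filtering come from predicate pushdown: the storage side evaluates the filter and ships only matches. The toy model below, with made-up record sizes and selectivity rather than anything measured in the talk, sketches that idea.

```python
# Toy model of near-data filtering ("in-SSD" style predicate pushdown):
# instead of shipping every record to the host and filtering there, the
# storage side applies the predicate and ships only the matches.
# Record size and selectivity are illustrative, not from the talk.
RECORD_BYTES = 64

def host_side_filter(records, predicate):
    # Conventional path: move everything to the host, then filter.
    bytes_moved = len(records) * RECORD_BYTES
    return [r for r in records if predicate(r)], bytes_moved

def near_data_filter(records, predicate):
    # Memory/storage-centric path: filter where the data resides,
    # move only the matching records over the interconnect.
    matches = [r for r in records if predicate(r)]
    return matches, len(matches) * RECORD_BYTES

records = list(range(100_000))
pred = lambda r: r % 100 == 0          # 1% of records match
_, moved_host = host_side_filter(records, pred)
_, moved_near = near_data_filter(records, pred)
print(f"traffic reduction: {moved_host // moved_near}x")  # prints "100x"
```

Both paths return the same matches; only the bytes crossing the storage-to-host link differ, and the reduction tracks the selectivity of the predicate.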

Mutlu emphasized a paradigm shift: moving from processor‑centric designs, where caches and interconnect dominate silicon area, to data‑centric architectures that embed compute within memory stacks. He argued that future controllers should become data‑aware agents, leveraging metadata (security, compressibility, locality) and even reinforcement‑learning policies to make smarter scheduling decisions, thereby reducing unnecessary traffic.
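One concrete flavor of such data-awareness is a controller that uses per-request metadata to reorder memory traffic. The sketch below is a minimal, classic FR-FCFS-style policy (prefer requests hitting the currently open DRAM row) written for illustration; it is not the specific controller design discussed in the talk, and all names are invented.

```python
# Minimal sketch of a metadata-aware memory scheduler: each request
# carries the DRAM row it targets, and the scheduler prefers requests
# that hit the currently open row (FR-FCFS-style), cutting row
# activations versus strict first-come-first-served order.
from collections import deque

def fcfs_activations(requests):
    """Row activations when serving requests strictly in arrival order."""
    open_row, acts = None, 0
    for r in requests:
        if r["row"] != open_row:
            open_row, acts = r["row"], acts + 1
    return acts

def schedule(requests):
    """Serve all requests, preferring open-row hits; count activations."""
    pending, order, open_row, acts = deque(requests), [], None, 0
    while pending:
        # Oldest request to the currently open row, if any; else oldest.
        hit = next((r for r in pending if r["row"] == open_row), None)
        req = hit if hit is not None else pending[0]
        pending.remove(req)
        if req["row"] != open_row:      # row miss: activate a new row
            open_row, acts = req["row"], acts + 1
        order.append(req["addr"])
    return order, acts

reqs = [{"addr": a, "row": a % 2} for a in range(8)]  # rows alternate
_, aware_acts = schedule(reqs)
print(f"activations: FCFS={fcfs_activations(reqs)}, "
      f"data-aware={aware_acts}")  # FCFS=8, data-aware=2
```

The same principle generalizes: richer metadata (compressibility, security domain, access locality) or a learned policy could replace the one-line preference rule, which is the direction the talk points toward.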

The implications are clear: to sustain the growth of AI and scientific workloads, industry must redesign chips to process data where it resides. Memory‑centric solutions promise dramatic gains in performance, energy, and cost, while intelligent, data‑driven system components could unlock further efficiencies across servers, edge devices, and mobile platforms.

Original Description

Memory-Centric Computing
Talk at IEEE Custom Integrated Circuits Conference
Presenter: Professor Onur Mutlu (https://people.inf.ethz.ch/omutlu/)
Date: April 23, 2023
Slides (pptx):
Slides (pdf):
Recommended Reading:
====================
A Modern Primer on Processing in Memory
Intelligent Architectures for Intelligent Computing Systems
RowHammer: A Retrospective
Fundamentally Understanding and Solving RowHammer
RECOMMENDED LECTURE VIDEOS & PLAYLISTS:
========================================
Computer Architecture Fall 2021 Lectures Playlist:
Digital Design and Computer Architecture Spring 2021 Livestream Lectures Playlist:
Featured Lectures:
Interview with Professor Onur Mutlu:
The Story of RowHammer Lecture:
Accelerating Genome Analysis Lecture:
Memory-Centric Computing Systems Tutorial at IEDM 2021:
Intelligent Architectures for Intelligent Machines Lecture:
Computer Architecture Fall 2020 Lectures Playlist:
Digital Design and Computer Architecture Spring 2020 Lectures Playlist:
Public Lectures by Onur Mutlu, Playlist:
Computer Architecture at Carnegie Mellon Spring 2015 Lectures Playlist:
Rethinking Memory System Design Lecture @stanfordonline :
