Memory-Centric Computing: EDA Workshop Keynote Speech by Prof. Onur Mutlu, May 8, 2023

Onur Mutlu Lectures
Feb 25, 2026

Why It Matters

By rethinking where computation occurs, memory‑centric designs can dramatically lower latency and energy, reshaping data‑intensive industries. The approach addresses critical security and scalability challenges that traditional von Neumann systems struggle with.

Key Takeaways

  • Memory‑centric designs shift compute toward storage
  • Processing‑in‑memory reduces data movement energy
  • RowHammer vulnerabilities highlight need for resilient memory
  • Emerging architectures promise AI and genomics acceleration
  • Academic resources expand research on intelligent memory systems

Pulse Analysis

Memory‑centric computing is gaining traction as the industry confronts the "memory wall"—the growing disparity between processor speed and memory bandwidth. By integrating compute units directly within DRAM or emerging non‑volatile memories, processing‑in‑memory (PIM) architectures eliminate costly data shuttling across buses. This not only slashes energy consumption but also unlocks new performance ceilings for workloads that are heavily data‑bound, such as deep‑learning inference, real‑time analytics, and large‑scale scientific simulations. Researchers like Prof. Mutlu have demonstrated prototype systems where simple arithmetic can be performed inside memory arrays, achieving orders‑of‑magnitude speedups over conventional designs.

Reliability and security remain paramount as memory becomes an active compute substrate. The RowHammer phenomenon—where repeated accesses to a row can induce bit flips in adjacent rows—exemplifies the vulnerabilities introduced by tighter memory‑logic integration. Mutlu’s keynote emphasized novel mitigation strategies, including adaptive refresh, in‑memory error‑correcting codes, and architectural safeguards that detect and throttle malicious access patterns. These techniques are essential for maintaining data integrity in PIM‑enabled servers, especially as they scale to exascale data centers and edge devices where fault tolerance is non‑negotiable.
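One family of safeguards mentioned above tracks row activations and proactively refreshes neighboring victim rows once an aggressor row crosses a threshold. The sketch below illustrates that counter-based idea only; the class name, threshold value, and flat row-adjacency model are assumptions for illustration, not the mechanism of any real DRAM controller.

```python
# Minimal sketch of counter-based RowHammer mitigation: count activations
# per row and refresh the physically adjacent (victim) rows when a row is
# hammered past a threshold. Threshold and adjacency model are assumptions.

class RowHammerGuard:
    def __init__(self, hammer_threshold=1000):
        self.threshold = hammer_threshold  # assumed activations per window
        self.counts = {}                   # per-row activation counters
        self.neighbor_refreshes = []       # log of victim rows refreshed

    def activate(self, row):
        """Called on every row activation (ACT command)."""
        self.counts[row] = self.counts.get(row, 0) + 1
        if self.counts[row] >= self.threshold:
            # Proactively refresh the two physically adjacent victim rows.
            for victim in (row - 1, row + 1):
                self.neighbor_refreshes.append(victim)
            self.counts[row] = 0           # reset counter after mitigation

    def refresh_window_elapsed(self):
        """Counters reset at the normal refresh interval (e.g. every 64 ms)."""
        self.counts.clear()

guard = RowHammerGuard(hammer_threshold=1000)
for _ in range(2500):                      # an aggressor hammering row 42
    guard.activate(42)
print(guard.neighbor_refreshes)            # rows 41 and 43, refreshed twice
```

Real mitigations must also bound the counter state (e.g. with probabilistic sampling or a small set of tracked rows), which is where the adaptive and architectural techniques from the keynote come in.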

The commercial implications are profound. Cloud providers, AI chip designers, and genomics firms are investing in memory‑centric prototypes to reduce operational costs and accelerate time‑to‑insight. Industry roadmaps now feature hybrid memory cubes, 3D‑stacked DRAM with compute layers, and emerging technologies like resistive RAM that natively support logic operations. As the ecosystem matures—bolstered by open‑source curricula, lecture series, and seminal papers—the gap between academic research and production silicon narrows, positioning memory‑centric computing as a cornerstone of next‑generation high‑performance and energy‑efficient systems.

Original Description

Title: Memory-Centric Computing: EDA Workshop Keynote
Presenter: Professor Onur Mutlu (https://people.inf.ethz.ch/omutlu/)
Date: May 8, 2023
Recommended Reading:
====================
A Modern Primer on Processing in Memory
Intelligent Architectures for Intelligent Computing Systems
RowHammer: A Retrospective
Fundamentally Understanding and Solving RowHammer
RECOMMENDED LECTURE VIDEOS & PLAYLISTS:
========================================
Computer Architecture Fall 2021 Lectures Playlist:
Digital Design and Computer Architecture Spring 2021 Livestream Lectures Playlist:
Featured Lectures:
Interview with Professor Onur Mutlu:
The Story of RowHammer Lecture:
Accelerating Genome Analysis Lecture:
Memory-Centric Computing Systems Tutorial at IEDM 2021:
Intelligent Architectures for Intelligent Machines Lecture:
Computer Architecture Fall 2020 Lectures Playlist:
Digital Design and Computer Architecture Spring 2020 Lectures Playlist:
Public Lectures by Onur Mutlu, Playlist:
Computer Architecture at Carnegie Mellon Spring 2015 Lectures Playlist:
Rethinking Memory System Design Lecture @stanfordonline :
