"Can We Do Better?" Prof. Onur Mutlu's MICRO 2025 Keynote Talk at Seoul - 21.10.2025

Onur Mutlu Lectures
Apr 15, 2026

Why It Matters

Memory‑centric designs can slash data‑movement energy and latency, directly lowering operating costs for AI, cloud, and genomics workloads.

Key Takeaways

  • Memory dominates energy and performance costs in modern computing systems.
  • Processor‑centric designs waste up to 90% of system energy on data movement.
  • Shifting to memory‑centric or processing‑in‑memory architectures can cut latency.
  • Paradigm change requires new hardware, software, and theoretical models.
  • Real‑world studies (Google, edge TPU) confirm memory bottleneck across workloads.

Summary

Prof. Onur Mutlu’s MICRO 2025 keynote, titled “Can We Do Better?”, framed the memory bottleneck as the central obstacle to energy‑efficient, high‑performance computing. He argued that although the industry has long optimized computation for energy, the majority of system energy is actually consumed by memory and data movement, a reality that is worsening with data‑intensive AI and genomics workloads.

Mutlu presented compelling data: Google’s 2015 data‑center analysis showed processors spend only 10‑20% of cycles on useful work, with the rest waiting for memory. Subsequent studies on edge‑TPU accelerators revealed over 90% of system energy is spent on memory accesses. He highlighted the stark energy disparity—memory accesses can cost thousands of times more than a single arithmetic operation—making the current processor‑centric paradigm unsustainable.
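The scale of that disparity can be sketched with a back‑of‑the‑envelope comparison. The per‑operation energies below are illustrative 45 nm‑class figures in the style of widely cited circuit surveys, not numbers taken from the talk itself:

```python
# Illustrative per-operation energy costs (picojoules).
# These specific values are assumptions for illustration only.
ENERGY_PJ = {
    "32-bit integer add":     0.1,
    "32-bit float multiply":  3.7,
    "32-bit SRAM cache read": 5.0,
    "32-bit DRAM read":       640.0,
}

# Express each operation's cost relative to a simple integer add,
# showing why off-chip memory access dominates the energy budget.
base = ENERGY_PJ["32-bit integer add"]
for op, pj in ENERGY_PJ.items():
    print(f"{op:24s} {pj:7.1f} pJ  (~{pj / base:,.0f}x an integer add)")
```

Even with different process nodes or exact figures, the qualitative conclusion holds: a DRAM access costs orders of magnitude more energy than the arithmetic it feeds, which is exactly the imbalance the keynote targets.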

He invoked historical paradigm shifts, likening the move to memory‑centric computing to the Copernican revolution, and cited early research from the 1960s on near‑memory processing. The talk emphasized that achieving a true shift requires redesigning hardware interfaces, compilers, programming models, and even the theoretical foundations of computation to prioritize data movement alongside operation counts.
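One existing model that already accounts for data movement alongside operation counts is the roofline model, which bounds attainable throughput by the lesser of peak compute and memory bandwidth times arithmetic intensity. A minimal sketch, with hypothetical machine numbers chosen only to illustrate the idea:

```python
# Roofline-style bound: attainable GFLOP/s is limited by either peak
# compute or by memory bandwidth x arithmetic intensity (FLOPs/byte).
PEAK_GFLOPS = 1000.0   # hypothetical peak compute throughput
MEM_BW_GBS = 100.0     # hypothetical memory bandwidth (GB/s)

def attainable_gflops(flops_per_byte: float) -> float:
    """Upper bound on throughput for a kernel of given intensity."""
    return min(PEAK_GFLOPS, MEM_BW_GBS * flops_per_byte)

# A streaming kernel like vector add moves ~12 bytes per FLOP and is
# memory-bound; a high-reuse kernel like dense matmul is compute-bound.
print(attainable_gflops(1 / 12))  # memory-bound, far below peak
print(attainable_gflops(50.0))    # compute-bound, hits the peak
```

The point of the sketch is that for low‑intensity kernels the processor's peak rate is irrelevant, which is the quantitative version of the keynote's argument for moving computation closer to memory.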

The implication for industry is clear: data‑center operators, AI developers, and hardware vendors must invest in processing‑in‑memory and autonomous memory accelerators to curb energy costs, improve latency, and sustain scaling. Companies that adopt memory‑centric architectures early will gain competitive advantages in performance, operational expenditure, and sustainability.

Original Description

Title: Can We Do Better?
Presenter: Professor Onur Mutlu (https://people.inf.ethz.ch/omutlu/)
Date: October 21, 2025
Venue: MICRO 2025 Keynote Talk at Seoul
Slides (pdf):
Recommended Reading:
====================
A Modern Primer on Processing in Memory
Memory-Centric Computing: Solving Computing's Memory Problem
Memory-Centric Computing: Recent Advances in Processing-in-DRAM
Intelligent Architectures for Intelligent Computing Systems
RowHammer: A Retrospective
Fundamentally Understanding and Solving RowHammer
Accelerating Genome Analysis via Algorithm-Architecture Co-Design
From Molecules to Genomic Variations: Accelerating Genome Analysis via Intelligent Algorithms and Architectures
RECOMMENDED LECTURE VIDEOS & PLAYLISTS:
========================================
Digital Design and Computer Architecture Spring 2025 Livestream Lectures Playlist:
Fundamentals of Computer Architecture Fall 2025 Livestream Lectures Playlist:
Seminar in Computer Architecture Spring 2025 Livestream Lectures Playlist:
Computer Architecture Fall 2024 Lectures Playlist:
Interview with Professor Onur Mutlu:
TCuARCH meets Prof. Onur Mutlu
Arch. Mentoring Workshop @ISCA'21 - Doing Impactful Research
The Story of RowHammer Lecture:
Accelerating Genome Analysis Lecture:
Memory-Centric Computing Systems Tutorial at IEDM 2021:
Intelligent Architectures for Intelligent Machines Lecture:
Featured Lectures:
