Reducing data movement by making memory the focal point of system design can dramatically improve performance and energy efficiency for data-intensive applications, enabling faster and cheaper analysis of rapidly growing data sources such as genomic sequencing and machine-learning pipelines. Adopting memory-centric architectures and online error correction is therefore critical for scaling future compute infrastructure and supporting edge-to-cloud workflows.
Professor Onur Mutlu outlined the case for memory-centric computing, arguing that modern workloads, especially machine learning and genomics, generate far more data than current systems can efficiently process. He highlighted trends such as wafer-scale processor designs and high-bandwidth memory (HBM) attachments as steps toward co-locating large memory and computation, and emphasized the growing importance of online error correction in memory systems. Mutlu stressed that many data-generating devices (e.g., nanopore sequencers, cameras, edge sensors) are strong at collection but weak at analysis, forcing costly data movement to distant general-purpose servers; the back-of-envelope sketch below illustrates why that movement dominates. He sketched accelerator-based solutions, such as FPGAs paired with HBM, and architectural shifts that push computation closer to memory and to the edge.
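To make the data-movement argument concrete, here is a minimal back-of-envelope sketch in Python for a memory-bound kernel. The energy constants (`PJ_PER_FLOP`, `PJ_PER_BYTE_DRAM`) are illustrative assumptions chosen only to show the shape of the trade-off, not figures from the talk; real values depend on the process node, DRAM technology, and interconnect.

```python
# Back-of-envelope comparison of compute energy vs. data-movement energy
# for a memory-bound kernel (element-wise vector add: c[i] = a[i] + b[i]).
# The constants below are illustrative assumptions, not measured values.

PJ_PER_FLOP = 1.0        # assumed energy of one floating-point add (picojoules)
PJ_PER_BYTE_DRAM = 20.0  # assumed energy to move one byte to/from off-chip DRAM

def vector_add_energy(n: int, bytes_per_elem: int = 8) -> dict:
    """Energy breakdown for an n-element vector add."""
    flops = n                              # one add per element
    bytes_moved = 3 * n * bytes_per_elem   # read a, read b, write c
    return {
        "compute_pj": flops * PJ_PER_FLOP,
        "movement_pj": bytes_moved * PJ_PER_BYTE_DRAM,
    }

breakdown = vector_add_energy(n=1_000_000)
ratio = breakdown["movement_pj"] / breakdown["compute_pj"]
print(f"compute:  {breakdown['compute_pj'] / 1e6:.1f} uJ")
print(f"movement: {breakdown['movement_pj'] / 1e6:.1f} uJ")
print(f"data movement costs ~{ratio:.0f}x the arithmetic")
```

Under these assumed constants, the kernel moves 24 bytes per useful operation, so movement energy exceeds arithmetic energy by roughly two to three orders of magnitude; this is the ratio that processing-in-memory and near-memory accelerators aim to shrink.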