Memory-Centric Computing - Invited Talk - Systems Research Community @ France - 29.11.2022
Why It Matters
Data volumes continue to outpace processing capacity. Rethinking architectures to minimize data movement could unlock large performance and energy gains for AI, genomics, IoT, and other data-intensive industries, and early adoption of near-memory and accelerator-driven designs may determine competitive advantage.
Summary
In an invited talk on memory-centric computing, ETH professor Onur Mutlu argued that modern systems are increasingly bottlenecked by data movement rather than compute, driven by exponential growth in datasets from domains such as neural networks and genomics. He highlighted how cheap, high-throughput data generators—exemplified by nanopore sequencers and ubiquitous sensors—produce more data than current general-purpose architectures can efficiently analyze. Mutlu surveyed approaches that shift computation closer to memory and to data sources, including near-memory FPGA accelerators with high-bandwidth memory, and sketched system-design opportunities for specialized pipelines in genomics and other data-intensive fields. The talk emphasized both the practical promise of memory-centric architectures and the programming and design challenges of adopting them.
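The data-movement bottleneck the talk describes can be made concrete with a simple roofline-style calculation: for low-arithmetic-intensity kernels, achievable performance is capped by memory bandwidth, not peak compute, which is why moving computation closer to high-bandwidth memory helps. The machine numbers below are illustrative assumptions for this sketch, not figures from the talk.

```python
# Roofline-style estimate of whether a kernel is memory-bound.
# All hardware numbers are hypothetical, chosen only to illustrate
# why bandwidth (not peak FLOPs) limits streaming workloads.

PEAK_FLOPS = 10e12   # 10 TFLOP/s peak compute (assumed)
DRAM_BW    = 200e9   # 200 GB/s conventional off-chip DRAM (assumed)
HBM_BW     = 1.2e12  # 1.2 TB/s near-memory HBM stack (assumed)

def attainable_flops(intensity, bandwidth, peak=PEAK_FLOPS):
    """Roofline model: performance = min(peak, intensity * bandwidth),
    where `intensity` is arithmetic intensity in FLOPs per byte moved."""
    return min(peak, intensity * bandwidth)

# A streaming kernel like y = a*x + y does ~2 FLOPs per 12 bytes
# (read x, read y, write y in float32): intensity ~ 0.17 FLOPs/byte.
saxpy_intensity = 2 / 12

dram_perf = attainable_flops(saxpy_intensity, DRAM_BW)
hbm_perf  = attainable_flops(saxpy_intensity, HBM_BW)

print(f"DRAM-limited: {dram_perf / 1e9:.0f} GFLOP/s")
print(f"HBM-limited:  {hbm_perf / 1e9:.0f} GFLOP/s")
print(f"bandwidth speedup: {hbm_perf / dram_perf:.1f}x")
```

At this intensity the kernel reaches only a few percent of peak compute under either bandwidth assumption, and raising memory bandwidth translates almost directly into speedup — the quantitative version of the talk's argument for near-memory designs.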