
New Devices Might Scale the Memory Wall

AI • Nanotech

IEEE Spectrum AI • February 9, 2026

Companies Mentioned

Google DeepMind

Why It Matters

By moving computation into memory, bulk RRAM could cut latency and power for AI workloads, accelerating edge intelligence while reducing reliance on cloud resources.

Key Takeaways

  • Bulk RRAM eliminates filament formation, simplifying integration
  • 40 nm devices stacked up to eight layers
  • Eight-layer stack supports 64 resistance levels in the megaohm range
  • 1 KB selector‑free array achieved 90% continual‑learning accuracy
  • Retention at high temperatures remains a key challenge
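The 64 resistance levels noted above amount to roughly 6 bits per cell (2^6 = 64). As a minimal sketch of what that implies for storing neural-network weights in such cells, here is a simple uniform quantizer; the function name and level count are illustrative assumptions, not from the article:

```python
import numpy as np

def quantize_to_levels(weights, n_levels=64):
    """Map real-valued weights onto n_levels evenly spaced values
    spanning the weights' range (a plain uniform quantizer).
    64 levels corresponds to ~6 bits of precision per memory cell."""
    lo, hi = weights.min(), weights.max()
    step = (hi - lo) / (n_levels - 1)
    codes = np.round((weights - lo) / step)  # integer codes 0..n_levels-1
    return lo + codes * step                 # reconstructed weight values

# Example: quantizing a few weights to 64 levels
w = np.linspace(-1.0, 1.0, 7)
wq = quantize_to_levels(w)
```

With 64 levels the worst-case rounding error is half a step of the weight range, which is one reason the reported parity with digital implementations on the continual-learning task is notable.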

Pulse Analysis

The memory wall—where data movement between processor and storage throttles AI performance—has driven intense research into in‑memory computing. Bulk resistive RAM offers a fresh approach by abandoning the noisy filament‑formation step that has plagued traditional RRAM. By switching an entire dielectric layer, the new devices operate at lower voltages, eliminate the need for selector transistors, and can be densely stacked in three dimensions, a critical advantage for future chip architectures.

At the IEEE International Electron Device Meeting, UC San Diego demonstrated a nanoscale bulk RRAM array that scales to eight layers, each cell capable of 64 distinct resistance levels in the megaohm range. This granularity enables analog matrix‑vector multiplication—a core operation in neural networks—directly within the memory array. The researchers assembled a 1‑kilobyte, selector‑free stack and ran a continual‑learning task on wearable sensor data, achieving 90% accuracy, on par with conventional digital implementations. Such performance suggests that edge devices could train and adapt models locally, reducing latency and preserving data privacy.
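The analog matrix-vector multiplication mentioned above works by physics: each cell's conductance encodes a weight, row voltages encode the input vector, and column currents sum the products (Ohm's law per cell, Kirchhoff's current law per column). A minimal idealized sketch, with all names and values hypothetical and non-idealities such as wire resistance and noise ignored:

```python
import numpy as np

def crossbar_mvm(conductances, voltages):
    """Idealized crossbar readout: each column current is
    I_j = sum_i V_i * G[i, j], i.e. a matrix-vector product
    computed in one parallel analog step."""
    return voltages @ conductances

# Hypothetical 8x4 crossbar: conductances in the microsiemens range
# (resistances in the megaohm range, as reported for the devices)
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 64e-6, size=(8, 4))  # cell conductances, siemens
V = rng.uniform(0.0, 0.2, size=8)          # read voltages, volts

I = crossbar_mvm(G, V)                     # column currents, amperes
```

Because the multiply-accumulate happens where the weights are stored, no weight data crosses a memory bus, which is exactly the data movement the "memory wall" refers to.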

Despite the promise, practical deployment faces hurdles. While the bulk RRAM retains data for years at room temperature, its stability at the elevated temperatures typical of processor environments is still unproven. Overcoming this reliability gap will be essential for integrating the technology into commercial AI accelerators. If resolved, bulk RRAM could reshape the memory hierarchy, delivering faster, more energy‑efficient AI inference and learning across a spectrum of applications from smartphones to autonomous systems.


Read Original Article