Build Memory-Aware Agents

Andrew Ng
Mar 18, 2026

Why It Matters

Equipping LLMs with persistent memory transforms them into truly autonomous agents, enabling businesses to deploy AI solutions that learn over time and handle long‑term, complex tasks efficiently.

Key Takeaways

  • Memory transforms stateless LLMs into agents that learn over time.
  • Oracle AI Database provides scalable semantic retrieval for agent memory.
  • The course teaches building a memory manager that abstracts storage and retrieval operations.
  • Cognitive operations let agents autonomously update their own memories.
  • Memory engineering extends beyond prompt and context engineering techniques.

Summary

The video announces a new training course, "Build Memory‑Aware Agents," created in partnership with Oracle and taught by Richmond Alake and Nacho Martínez. It positions memory as the missing piece that converts a stateless large language model into an agent capable of learning, adapting, and handling long‑horizon tasks.

The curriculum covers the most common memory patterns, showing how to construct a full‑stack memory system using Oracle’s AI database. Participants will build a memory manager that abstracts storage and retrieval, implement a semantic‑search layer, and develop cognitive operations that let agents autonomously refine their own knowledge bases.
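To make the memory‑manager idea concrete, here is a minimal sketch of the abstraction the course describes: a class that hides storage and retrieval behind `write` and `retrieve` methods. All names here are illustrative, and the toy bag‑of‑words similarity stands in for a real embedding model backed by a vector store such as Oracle AI Database.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would call an
    # embedding model and store vectors in a database.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryManager:
    """Abstracts storage and retrieval so the agent never
    touches the backing store directly."""

    def __init__(self):
        self._store = []  # list of (text, embedding) pairs

    def write(self, text: str) -> None:
        self._store.append((text, embed(text)))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Rank stored memories by semantic similarity to the query.
        q = embed(query)
        ranked = sorted(self._store, key=lambda m: cosine(q, m[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

mm = MemoryManager()
mm.write("User prefers concise answers in Spanish")
mm.write("Project deadline is March 31")
mm.write("User's favorite database is Oracle")
print(mm.retrieve("when is the project due?", k=1))
# → ['Project deadline is March 31']
```

Because the agent only sees `write` and `retrieve`, the backend can be swapped (in‑memory list, SQL table, vector database) without touching agent code.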

Key excerpts highlight the shift from traditional prompt engineering to a "memory‑first" approach: “We focused on prompt and context engineering, but agents that need to work over days or weeks require an effective memory system.” The course promises hands‑on experience building a functional memory‑aware agent by its conclusion.

For developers and enterprises, mastering memory engineering unlocks scalable, persistent AI assistants that can retain context across sessions, reduce repeated prompting costs, and tackle complex, multi‑step workflows—giving a strategic edge in the rapidly evolving AI landscape.

Original Description

New short course built in collaboration with Oracle: Agent Memory: Building Memory-Aware Agents.
Many AI agents work well within a single session but lose everything when the session ends. In this course, you’ll learn how to build a memory-first architecture that allows agents to persist knowledge, retrieve relevant context, and update what they know across sessions.
Using Oracle AI Database, LangChain, and LLM-powered pipelines, you’ll build a complete agent memory system that turns a stateless agent into one that can learn and improve over time.
In this course, you’ll learn to:
- Design a memory-first architecture and understand why stateless agents struggle with long-horizon tasks
- Build persistent memory stores and implement a Memory Manager that orchestrates how agents read and write memory
- Scale tool use by treating tools as procedural memory and retrieving them with semantic search
- Implement memory extraction, consolidation, and write-back pipelines so agents can update what they know
- Assemble a fully stateful memory-aware agent that loads prior context and improves across sessions
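The extraction, consolidation, and write‑back steps listed above can be sketched as a small pipeline. The course builds this with LLM‑powered pipelines; the rule‑based extractor and dict‑backed store below are hypothetical stand‑ins to show the control flow only.

```python
class MemoryPipeline:
    """Sketch of extract -> consolidate -> write-back, assuming a
    memory store that is just a dict keyed by fact topic."""

    def __init__(self):
        self.store: dict[str, str] = {}

    def extract(self, transcript: str) -> dict[str, str]:
        # Placeholder extractor: pulls "topic: value" lines from a
        # transcript. A real pipeline would prompt an LLM to extract facts.
        facts = {}
        for line in transcript.splitlines():
            if ":" in line:
                topic, value = line.split(":", 1)
                facts[topic.strip().lower()] = value.strip()
        return facts

    def consolidate(self, new_facts: dict[str, str]) -> dict[str, str]:
        # Newer facts overwrite older ones on the same topic, so the
        # agent's knowledge stays current instead of accumulating
        # contradictions.
        merged = dict(self.store)
        merged.update(new_facts)
        return merged

    def write_back(self, transcript: str) -> None:
        # Persist the consolidated view so the next session loads it.
        self.store = self.consolidate(self.extract(transcript))

pipe = MemoryPipeline()
pipe.write_back("favorite language: Java")
pipe.write_back("favorite language: Python\ntimezone: UTC+1")
print(pipe.store)
# → {'favorite language': 'Python', 'timezone': 'UTC+1'}
```

The key design point is that write‑back replaces the store with a consolidated view rather than appending raw transcripts, which is what lets the agent "update what it knows" across sessions.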
The course is taught by Oracle’s Richmond Alake, Director of AI Developer Experience, and Nacho Martínez, Principal Data Science Advocate at Oracle.
