AI Pulse

EMAIL DIGESTS

Daily

Every morning

Weekly

Sunday recap

NewsDealsSocialBlogsVideosPodcasts
AIVideosNew Course: Semantic Caching for AI Agents
AI

New Course: Semantic Caching for AI Agents

Andrew Ng • November 19, 2025

Why It Matters

Semantic caching can dramatically cut token costs and latency for AI‑driven applications, enabling businesses to deliver faster, cheaper services at scale.

Summary

The video announces a new online course on semantic caching for AI agents, developed in partnership with Redis and taught by Tyler Hutcherson and Iliya Zhechev. It positions semantic caching as a next‑generation technique that goes beyond exact‑match input‑output caching by reusing responses based on meaning, promising faster responses and lower token consumption.

The curriculum walks learners through building a semantic cache from the ground up, then re‑implementing it with Redis’s open‑source SDK. Key technical components include time‑to‑live policies for cache freshness, an open‑weight embedding model fine‑tuned for cache accuracy, and the use of similarity thresholds to balance hit rate, precision, recall, and latency.
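The from-scratch build described above can be sketched in a few dozen lines. The snippet below is illustrative only: it uses a toy hashed bag-of-words embedding in place of the fine-tuned embedding model the course uses, and all class and function names are ours, not the course's.

```python
import math
import time

def toy_embed(text, dim=64):
    # Stand-in embedding: hashed bag-of-words, L2-normalized. A real cache
    # would call a trained embedding model; this keeps the sketch runnable.
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are already normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class SemanticCache:
    def __init__(self, threshold=0.8, ttl=3600.0):
        self.threshold = threshold  # similarity cutoff: higher = stricter
        self.ttl = ttl              # time-to-live policy keeps answers fresh
        self.entries = []           # (embedding, response, stored_at)

    def get(self, query):
        # Evict expired entries, then return the best match above threshold.
        now = time.time()
        self.entries = [e for e in self.entries if now - e[2] < self.ttl]
        q = toy_embed(query)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best is not None and cosine(q, best[0]) >= self.threshold:
            return best[1]
        return None  # cache miss: caller falls through to the model

    def put(self, query, response):
        self.entries.append((toy_embed(query), response, time.time()))
```

Raising `threshold` trades recall for precision: fewer lookups hit the cache, but the hits that do occur are more likely to be genuinely equivalent questions.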

Throughout the course, participants will measure performance metrics such as hit rate, precision, recall, and latency, observing how adjusting similarity thresholds impacts each. Real‑world examples illustrate how a semantic cache can accelerate complex AI agents while reducing operational costs.
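Under one common formulation (not necessarily the course's exact definitions), those metrics can be computed from per-query outcomes: a "hit" is any query the cache answers, a hit is "correct" if the reused response actually fits the query, and a query is "reusable" if a semantically equivalent answer was sitting in the cache.

```python
def cache_metrics(results):
    # results: list of (hit, correct, reusable) booleans per query.
    # Definitions here are one common choice, labeled as our assumption.
    n = len(results)
    hits = sum(1 for h, _, _ in results if h)
    true_hits = sum(1 for h, c, _ in results if h and c)
    reusable = sum(1 for _, _, r in results if r)
    return {
        "hit_rate": hits / n,                               # how often we skip the model
        "precision": true_hits / hits if hits else 0.0,     # how often a hit is right
        "recall": true_hits / reusable if reusable else 0.0 # reusable answers we served
    }
```

Latency is measured separately, typically by timing a cache lookup against a full model call for the same query.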

The broader implication is that developers can deploy more responsive and cost‑effective AI applications, giving enterprises a competitive advantage in scaling AI services without proportional increases in compute spend.

Original Description

Learn more: https://bit.ly/44btwJY
Join our new short course, Semantic Caching for AI Agents! Learn from Tyler Hutcherson, Applied AI Engineering Lead, and Iliya Zhechev, Senior Research Engineer at Redis.
In this course, you'll build a semantic cache that makes your AI agents faster and more cost-effective by recognizing when different questions mean the same thing. For example, when someone asks "How do I get a refund?" and another asks "I want my money back," your cache will reuse the answer instead of making another API call, reducing the need for redundant model calls.
In detail, you'll learn to:
- Build your first semantic cache from scratch - Build a working cache to see how each component works, then implement it using Redis' open source tools.
- Measure cache effectiveness with key metrics - Track cache hit rate, precision, recall, and latency to understand your cache's real impact.
- Enhance cache accuracy with advanced techniques - Use threshold tuning, cross-encoders, LLM validation, and fuzzy matching to make your cache more effective.
- Build a fast AI agent with semantic caching - Integrate semantic caching into an AI agent that reuses results, skips redundant work, and gets faster over time.
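Threshold tuning, the first of these techniques, can be illustrated with a simple sweep: score a labeled set of candidate cache matches, then compute precision and recall at each cutoff. The function name and input format below are our own illustration, not the course's code.

```python
def sweep_thresholds(scored, thresholds):
    # scored: list of (similarity, is_match) pairs, where is_match is the
    # ground-truth label for whether the cached answer really applies.
    out = []
    relevant = sum(1 for _, m in scored if m)
    for t in thresholds:
        accepted = [m for s, m in scored if s >= t]
        tp = sum(accepted)  # true positives among accepted lookups
        precision = tp / len(accepted) if accepted else 1.0
        recall = tp / relevant if relevant else 0.0
        out.append((t, precision, recall))
    return out
```

Plotting the resulting curve makes the trade-off concrete: high thresholds keep precision near 1.0 but leave reusable answers on the table, while low thresholds recover them at the cost of serving wrong answers.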
Start building AI agents that respond faster and cost less to run.
Enroll now: https://bit.ly/44btwJY