Guardrails with LangChain: A Complete Crash Course for Building Safe AI Agents

Krish Naik
Mar 5, 2026

Why It Matters

Implementing guardrails with LangChain lets businesses deploy AI agents that are both secure and cost-efficient, protecting data privacy and maintaining regulatory compliance while avoiding costly model misuse.

Key Takeaways

  • Guardrails enforce safe inputs, actions, and compliant outputs for AI agents.
  • Two guardrail strategies: deterministic rule‑based and model‑based LLM checks.
  • LangChain middleware offers built‑in PII detection and human‑in‑the‑loop hooks.
  • Pre‑agent and post‑agent hooks can block unsafe requests at zero LLM cost.
  • Layered guardrails combine multiple techniques for robust, cost‑effective safety.
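The zero-cost, pre-agent check mentioned in the takeaways can be sketched in plain Python. The function and field names below are illustrative assumptions, not LangChain's actual middleware API:

```python
# Illustrative deterministic, rule-based guardrail (hypothetical names;
# not the LangChain middleware API itself). Because it runs before the
# agent, blocked requests never reach the LLM and incur zero inference cost.
BLOCKED_TERMS = {"hack", "malware"}

def before_agent_guardrail(user_input: str) -> dict:
    """Return a block decision if the input contains a disallowed term."""
    lowered = user_input.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return {"allowed": False,
                    "reason": f"Input contains blocked term: {term!r}"}
    return {"allowed": True, "reason": None}
```

A real deployment would hook a function like this into the agent's pre-call stage and short-circuit with a refusal message instead of invoking the model.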

Summary

The video introduces guardrails as essential safety mechanisms that sit around an AI agent's pipeline, ensuring only safe inputs are processed, only approved actions are taken, and only compliant outputs are returned. Using LangChain's middleware framework, the presenter explains how developers can embed these controls directly into agent workflows.

Two primary implementation approaches emerge: a deterministic, rule-based method that relies on keyword matching at zero LLM cost, and a model-based approach that uses an LLM to semantically evaluate content, albeit at higher expense. LangChain provides built-in middleware for PII redaction, human-in-the-loop approvals, and hooks that run before or after LLM calls, allowing flexible, cost-effective guardrail placement.

The speaker demonstrates practical code: a deterministic function blocks queries containing terms like "hack" or "malware," while a model-based guardrail prompts a small LLM to label inputs as safe or unsafe. Additional examples showcase PII detection that masks emails and credit-card numbers, and post-agent hooks that can rewrite unsafe output before it reaches the user.

These techniques help enterprises meet regulatory requirements, prevent prompt-injection attacks, and control operational costs. By layering deterministic checks with selective LLM validation, organizations can build robust, compliant AI applications without incurring unnecessary inference fees.
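The PII masking described in the summary can be sketched with simple regular expressions. This is a simplified stand-in for LangChain's built-in PII middleware; the patterns and function name are illustrative assumptions, not the library's API:

```python
import re

# Illustrative regex-based PII redaction (a simplified stand-in for
# LangChain's built-in PII detection middleware; patterns are assumptions).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
# 13-16 digits, optionally separated by spaces or hyphens, ending on a digit.
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def redact_pii(text: str) -> str:
    """Mask emails and credit-card-like numbers before they reach the model."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = CARD_RE.sub("[CARD]", text)
    return text
```

Production systems typically use validated detectors (e.g. Luhn checks for card numbers) rather than bare regexes, but the placement is the same: redaction runs on input before the agent sees it, and optionally on output as well.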

Original Description

In this crash course, we will cover everything you need to know about implementing guardrails in LangChain agents -- from simple keyword filters to production-grade layered middleware stacks. By the end, you will have built a fully guarded healthcare chatbot with PII detection, content filtering, human-in-the-loop approval, and output safety validation.
Here is what we will cover:
What are Guardrails and why do they matter?
Two approaches: Deterministic vs Model-based
Built-in: PII Detection Middleware
Built-in: Human-in-the-Loop Middleware
Custom: Before-Agent Guardrail (input filtering)
Custom: After-Agent Guardrail (output safety)
Layered / Combined Guardrails
Real-World Use Case: Healthcare Chatbot
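Layering the techniques listed above can look like the following sketch, where cheap deterministic checks run before any model call and output validation runs after it. All names here are hypothetical stand-ins for LangChain middleware hooks, not the library's API:

```python
import re

# Layered guardrail sketch (hypothetical helper names, not LangChain's API):
# Layer 1 is a free deterministic filter, Layer 2 redacts PII on the way in,
# Layer 3 validates and rewrites the agent's output on the way out.
BLOCKED_TERMS = {"hack", "malware"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def guarded_run(user_input: str, agent) -> str:
    lowered = user_input.lower()
    # Layer 1: deterministic input filter (zero LLM cost).
    if any(term in lowered for term in BLOCKED_TERMS):
        return "Request blocked by input guardrail."
    # Layer 2: PII redaction before the agent sees the text.
    safe_input = EMAIL_RE.sub("[EMAIL]", user_input)
    # Layer 3: run the agent, then validate/rewrite unsafe output.
    output = agent(safe_input)
    if EMAIL_RE.search(output):
        output = EMAIL_RE.sub("[EMAIL]", output)
    return output
```

The ordering is the cost-control point: only requests that pass the free checks ever reach the (paid) agent call, and a selective model-based check could slot in as an extra layer for inputs the rules cannot classify.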
Visit krishnaik.in for Live courses
