
Stanford CS221 | Autumn 2025 | Lecture 13: Bayesian Networks and Gibbs Sampling
The lecture revisits Bayesian networks, emphasizing their construction—identifying variables, drawing directed graphs, and populating conditional probability tables (CPTs). It then shifts focus to probabilistic inference, contrasting exact tensor‑based computation with approximate sampling methods, and introduces Gibbs sampling as a more sample‑efficient alternative to rejection sampling. Key insights include the factorization of the joint distribution as the product of local CPTs, illustrated with the classic burglary‑earthquake‑alarm example where P(B|A=1) equals 0.51. The instructor shows how exact inference requires enumerating all assignments, a computation whose cost grows exponentially with the number of variables, motivating approximate techniques. Rejection sampling is explained step‑by‑step, highlighting its simplicity but also its inefficiency when evidence is rare, as demonstrated by a 300‑sample run yielding a rough 0.44 estimate. The lecture then presents Gibbs sampling, a Markov chain Monte Carlo method that starts from a valid evidence‑consistent state and iteratively resamples each variable conditioned on all the others. A telephone‑game network (A→B→C) illustrates how Gibbs sampling keeps the evidence (C=1) fixed while exploring the posterior over A, avoiding the wasted samples that rejection sampling discards. The discussion underscores the trade‑off: successive Gibbs samples are correlated, yet the algorithm scales to high‑dimensional models. Implications are clear: Gibbs sampling enables scalable inference for large Bayesian networks common in AI, probabilistic programming, and decision‑support systems. Understanding conditional independence and appropriate sampling strategies equips practitioners to balance accuracy, computational cost, and convergence guarantees in real‑world applications.
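The exact-versus-approximate contrast above can be sketched in a few lines of Python. The CPT values below are assumptions—the classic setup with A = B OR E and P(B=1) = P(E=1) = 0.05, which happens to reproduce the 0.51 figure—not necessarily the exact numbers used in class. Enumeration computes P(B=1 | A=1) directly from the factorized joint, and a Gibbs sampler estimates the same posterior by starting from an evidence-consistent state and resampling each hidden variable given the others:

```python
import random

random.seed(0)

# Hypothetical CPTs for the burglary-earthquake-alarm example:
# B, E ~ Bernoulli(0.05) independently, and A = B OR E (deterministic).
EPS = 0.05

def p_b(b): return EPS if b == 1 else 1 - EPS
def p_e(e): return EPS if e == 1 else 1 - EPS
def p_a(a, b, e): return 1.0 if a == (b | e) else 0.0

# Exact inference: enumerate all assignments of (B, E), weighting each by
# the factorized joint P(B) * P(E) * P(A=1 | B, E).
num = sum(p_b(b) * p_e(e) * p_a(1, b, e) for b in (0, 1) for e in (0, 1) if b == 1)
den = sum(p_b(b) * p_e(e) * p_a(1, b, e) for b in (0, 1) for e in (0, 1))
exact = num / den
print(f"exact P(B=1 | A=1) = {exact:.4f}")  # 1/(2 - EPS), about 0.5128

# Gibbs sampling: start from a state consistent with the evidence A=1,
# then repeatedly resample each hidden variable given all the others.
b, e = 1, 1  # satisfies A = B OR E = 1
count_b1 = 0
n_sweeps = 100_000
for _ in range(n_sweeps):
    # Resample B given E and A=1: P(B | E, A=1) is proportional to
    # P(B) * P(A=1 | B, E); only the CPTs touching B matter.
    w1 = p_b(1) * p_a(1, 1, e)
    w0 = p_b(0) * p_a(1, 0, e)
    b = 1 if random.random() < w1 / (w0 + w1) else 0
    # Resample E given B and A=1, symmetrically.
    w1 = p_e(1) * p_a(1, b, 1)
    w0 = p_e(0) * p_a(1, b, 0)
    e = 1 if random.random() < w1 / (w0 + w1) else 0
    count_b1 += b

print(f"Gibbs estimate of P(B=1 | A=1) ~ {count_b1 / n_sweeps:.3f}")
```

For comparison, a rejection sampler under these priors would draw full joint samples and discard every one with A=0—roughly 90% of its draws, since P(A=1) is only about 0.0975—which is exactly the inefficiency the lecture's 300‑sample experiment exposes.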

Stanford CS221 | Autumn 2025 | Lecture 12: Bayesian Networks I
In Lecture 12 of Stanford’s CS221, Professor Koller pivots from the model‑free learning methods covered earlier to a model‑based framework, introducing Bayesian networks as a systematic way to represent and reason about uncertain worlds. He explains that a joint probability distribution...

Stanford CS221 | Autumn 2025 | Lecture 11: Games II
The lecture revisits two‑player zero‑sum games, reviewing the minimax principle and alpha‑beta pruning before introducing reinforcement‑learning techniques to learn game evaluation functions. Professor Ng explains why hand‑crafted heuristics, such as chess piece‑value tables, can be replaced by learned value networks. Key...

Stanford CS221 | Autumn 2025 | Lecture 10: Games I
The lecture introduces game theory as the next step after Markov decision processes and reinforcement learning, focusing on two‑player zero‑sum games. It defines a game formally with start states, player‑turn functions, and successor mappings, and emphasizes that utility is realized...

Stanford CS221 | Autumn 2025 | Lecture 9: Policy Gradient
The lecture revisits reinforcement learning fundamentals before shifting focus to policy‑based approaches that learn the policy itself rather than a value function. After reviewing Markov decision processes, Q‑learning, SARSA, and the role of exploration policies, the instructor frames the discussion...

Stanford CS221 | Autumn 2025 | Lecture 8: Reinforcement Learning
The lecture revisits Markov Decision Processes (MDPs) before launching into reinforcement learning (RL). It outlines the core components of an MDP—states, actions, transition probabilities, rewards, and discount factor—using the illustrative "flaky tram" example, and clarifies how a policy maps states...

Stanford CS221 | Autumn 2025 | Lecture 7: Markov Decision Processes
The lecture introduces Markov Decision Processes (MDPs) as the stochastic extension of deterministic search problems, positioning them as the foundation for reinforcement learning. After reviewing search’s start state, successors, costs, and end criteria, the professor highlights that real‑world decisions often...

Stanford CS221 | Autumn 2025 | Lecture 6: Search II
The lecture revisits search problems, introducing Uniform Cost Search (UCS) as an exact algorithm capable of handling cycles, and briefly foreshadows its relationship to A*. Key concepts include the distinction between past cost (minimum cost from start) and future cost (minimum...

Stanford CS221 | Autumn 2025 | Lecture 5: Search I
The lecture introduces search as a core reasoning tool that complements machine‑learning predictors. After reviewing the limits of reflexive mapping, the instructor explains why deterministic search remains vital, citing Rich Sutton’s “Bitter Lesson” that general, compute‑driven methods—search and learning—scale best. Key...

Stanford CS221 | Autumn 2025 | Lecture 4: Learning III
The lecture introduces deep learning fundamentals while guiding students from hand‑crafted computation graphs to the PyTorch ecosystem. After reviewing linear models, the professor emphasizes that modern frameworks like PyTorch and JAX handle forward evaluation, automatic differentiation, and graph management far...

Stanford CS221 | Autumn 2025 | Lecture 3: Learning II
The lecture introduces linear classification, extending the regression framework to predict discrete class labels. By representing inputs as vectors and applying a weighted sum plus bias, the model outputs a logit whose sign determines the predicted class, typically encoded as +1...

Stanford CS221 | Autumn 2025 | Lecture 2: Learning I
The lecture introduces tensors and the einops library, emphasizing how naming axes clarifies operations on tensors of any order. It then dives deep into the einsum function, showing how a single notation can express identity mapping, summations, element‑wise products, dot products, outer...

Stanford CS221 | Autumn 2025 | Lecture 1: Course Overview and AI Foundations
The opening lecture of Stanford’s CS221 course sets the stage by redefining artificial intelligence as a combination of perception, reasoning, action, and learning. Professor Percy Liang emphasizes that, despite rapid advances, the core foundations remain stable while the curriculum adapts...

Stanford AA228 Decision Making Under Uncertainty | Autumn 2025 | Offline Belief State Planning
The lecture introduced offline belief‑state planning for partially observable Markov decision processes, emphasizing that exact POMDP solvers quickly become intractable and motivating scalable approximations. Students were shown how the number of alpha vectors grows exponentially—e.g., a ten‑step horizon can generate...

Stanford Robotics Seminar ENGR319 | Winter 2026 | Bringing AI Up To Speed
The lecture framed autonomous driving as the ultimate test for artificial intelligence, contrasting it with games like chess that have already been mastered by AI. While chess operates in a closed, rule‑bound environment, driving unfolds in an open system where...