Stanford CS221 | Autumn 2025 | Lecture 5: Search I
Why It Matters
Understanding search fundamentals equips AI practitioners to integrate planning with large language models, unlocking more efficient and reliable solutions for complex tasks.
Key Takeaways
- Search problems formalize planning beyond reflexive predictors in AI.
- State representation must include all constraints for accurate search.
- The successor function defines actions, their costs, and the resulting states.
- Combining learned models with search yields powerful problem-solving.
- Exhaustive search guarantees optimality, but its cost grows exponentially with the number of actions.
Summary
The lecture introduces search as a core reasoning tool that complements machine‑learning predictors. After reviewing the limits of reflexive mapping, the instructor explains why deterministic search remains vital, citing Rich Sutton’s “Bitter Lesson” that general, compute‑driven methods—search and learning—scale best.
Key concepts are formalized through the notion of a search problem: a start state, a successor function enumerating actions with associated costs, and an end test. Simple examples—walking versus taking a magic tram—illustrate how states can be enriched to capture tickets, last‑action flags, or other constraints, ensuring the algorithm respects problem rules.
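The formalization above can be sketched as code. This is a minimal, illustrative version of the walking/tram example: it assumes states are street numbers 1 through N, walking moves from s to s+1, the tram doubles s, and the costs are 1 and 2 respectively (placeholder constants; the lecture's exact numbers may differ). The exhaustive backtracking search then enumerates every action sequence and keeps the cheapest.

```python
class TransportationProblem:
    """Hypothetical encoding of the lecture's street-walking example."""

    def __init__(self, N):
        self.N = N  # goal: reach street N, starting from street 1

    def start_state(self):
        return 1

    def is_end(self, state):
        return state == self.N

    def succ_and_cost(self, state):
        # Each entry is (action, next state, cost) -- the successor
        # function enumerates actions with their associated costs.
        results = []
        if state + 1 <= self.N:
            results.append(('walk', state + 1, 1))  # assumed cost 1
        if 2 * state <= self.N:
            results.append(('tram', 2 * state, 2))  # assumed cost 2
        return results


def backtracking_search(problem):
    # Exhaustive search: recursively try every action sequence and
    # remember the cheapest one that reaches an end state.
    best = {'cost': float('inf'), 'history': None}

    def recurse(state, history, total_cost):
        if problem.is_end(state):
            if total_cost < best['cost']:
                best['cost'], best['history'] = total_cost, history
            return
        for action, new_state, cost in problem.succ_and_cost(state):
            recurse(new_state, history + [(action, new_state, cost)],
                    total_cost + cost)

    recurse(problem.start_state(), [], 0)
    return best['cost'], best['history']


cost, history = backtracking_search(TransportationProblem(10))
print(cost)  # minimum total cost under the assumed costs
```

Note how enriching the state (e.g. adding a tickets-remaining field to the tuple returned by `succ_and_cost`) would let the same generic algorithm respect extra problem rules without any change to the search code itself.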
A memorable quote from Sutton underscores the argument: “general methods that leverage computation are ultimately the most effective, and by a large margin.” The instructor uses this to justify revisiting symbolic search, especially when paired with modern learned models that can guide or prune the search space.
The practical implication is clear: precise modeling of states and successors enables generic algorithms—like exhaustive search or dynamic programming—to find optimal solutions, while coupling these algorithms with powerful learned heuristics promises scalable, high‑performance AI systems.
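As a sketch of the dynamic-programming alternative mentioned above: when every action strictly increases the street number, the state graph is acyclic, so the minimum future cost from each state can be memoized instead of re-explored. The walk/tram costs (1 and 2) are the same assumed values as before.

```python
from functools import lru_cache

def dp_min_cost(N):
    # Dynamic programming over states 1..N: future_cost(s) is the
    # cheapest cost to get from street s to street N. Memoization is
    # valid because the state graph is acyclic (states only increase).
    @lru_cache(maxsize=None)
    def future_cost(state):
        if state == N:
            return 0
        candidates = []
        if state + 1 <= N:
            candidates.append(1 + future_cost(state + 1))   # walk, assumed cost 1
        if 2 * state <= N:
            candidates.append(2 + future_cost(2 * state))   # tram, assumed cost 2
        return min(candidates)

    return future_cost(1)

print(dp_min_cost(10))  # → 6, matching exhaustive search on this instance
```

Because each state is solved once, the work is linear in the number of states rather than exponential in the number of actions, which is the scalability gap the lecture highlights.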