
Shift Left: Not a Magic Bullet with Liz Rice
Liz Rice argues that the shift‑left mantra, while still relevant, is no longer a silver bullet for software security. The buzz has moved toward supply‑chain transparency and SBOMs, but early‑stage testing alone cannot eliminate runtime risk: static scanning catches only known vulnerabilities and offers no protection against zero‑day exploits. Organizations must therefore pair shift‑left practices with robust runtime controls and continuous monitoring to address threats that emerge in production. A memorable line from the talk, “you can’t scan your way out of runtime risk,” captures the core message. Rice also notes that provenance and SBOMs, though valuable, do not guarantee safety without active defense mechanisms. The takeaway for businesses is a layered security strategy spanning development, supply‑chain verification, and real‑time protection; relying solely on early testing leaves gaps that attackers can exploit.

Generative AI in the Real World: Chip Huyen on Finding Business Use Cases for Generative AI
In the inaugural episode of O’Reilly’s “Generative AI in the Real World,” host Ben Lorica interviews Chip Huyen, founder of Claypot AI and author of Designing Machine Learning Systems, to explore how enterprises can discover practical generative‑AI use cases. Huyen stresses...

Box’s Strategic Pivot From Content to Context
Box is redefining its platform by shifting from a content‑centric model to a context‑driven architecture that empowers autonomous AI agents. The company argues that traditional workflows assume users start with zero context, but agents need just‑right, surgical context to act...

Startups Versus Incumbents
The video examines how incumbents and startups compete in deploying AI agents across business workflows, arguing that the balance of power hinges on data availability and task structure. Incumbents retain an edge when large volumes of workflow data already reside in...

AI Requires More Engineering Sophistication, Not Less
In his AI CodeCon talk, Box CEO Aaron Levie argues that AI‑generated code does not simplify engineering but deepens its technical demands. Engineers must still master the underlying trade‑offs of building scalable systems, whether deterministic or nondeterministic. The rise of AI...

Designing RL Environments for Model Training with Sharon Zhou
The video focuses on how enterprises can efficiently enhance large language models by designing reinforcement‑learning (RL) environments rather than attempting costly, in‑house post‑training. Sharon Zhou emphasizes that most companies lack the stable, GPU‑scale infrastructure needed for large‑scale fine‑tuning, and should...
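The idea of an RL environment can be made concrete with a minimal, Gym‑style interface. The toy task, reward rule, and class name below are illustrative assumptions for this sketch, not code from Zhou's talk.

```python
# Minimal sketch of a Gym-style RL environment for model post-training.
# The task (rewarding a response of a target length), the reward rule,
# and all names here are illustrative assumptions.

class EchoLengthEnv:
    """Toy environment: a 'model' is rewarded for matching a target length."""

    def __init__(self, target_len: int = 5):
        self.target_len = target_len
        self.prompt = f"respond with exactly {target_len} tokens"

    def reset(self) -> str:
        """Return the initial observation (here, a prompt string)."""
        return self.prompt

    def step(self, action: str):
        """Score an action (a model response) and end the episode.

        Returns (observation, reward, done) in the usual Gym shape.
        Reward is 1.0 for an exact length match, decaying with distance.
        """
        n_tokens = len(action.split())
        reward = 1.0 / (1 + abs(n_tokens - self.target_len))
        return self.prompt, reward, True


env = EchoLengthEnv(target_len=3)
obs = env.reset()
_, reward, done = env.step("one two three")
```

The point of the pattern is that the environment, not a labeled dataset, encodes the task: a reward function like this one replaces the GPU‑scale fine‑tuning pipeline most companies lack.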

In-Context Learning vs Supervised Fine-Tuning with Sharon Zhou
The discussion centers on the trade‑offs between in‑context learning—embedding examples directly in prompts—and supervised fine‑tuning, where a model is retrained on task‑specific data. In‑context prompting is quick to implement and can be cost‑effective when API calls are infrequent and the context...
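The in‑context approach amounts to prompt assembly at inference time. The sentiment task and helper below are assumptions chosen for illustration, not Zhou's code.

```python
# Sketch of in-context learning: labeled task examples are embedded
# directly in the prompt at inference time, rather than baked into the
# model's weights by fine-tuning. The sentiment task is illustrative.

def build_few_shot_prompt(examples, query):
    """Assemble labeled examples plus a new query into one prompt string."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)


examples = [
    ("Loved every minute of it.", "positive"),
    ("A tedious, joyless slog.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Surprisingly good.")
```

Note that every API call resends the examples, so token cost grows with both call frequency and example count, which is part of the trade‑off against fine‑tuning that the discussion describes.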

The Rise of Agent-First Source Code with Addy Osmani and Tim O'Reilly
The video features Addy Osmani and Tim O'Reilly debating an emerging “agent‑first” paradigm, where software agents—not developers—become the primary consumers of source code. They argue that as AI agents grow more capable, code may be authored for machine readability first,...

Generative AI in the Real World: Sharon Zhou on Post-Training
The conversation centers on post‑training—techniques that adapt large language models after their initial pre‑training—to make them practical for enterprise use. Host Ben Lorica interviews Sharon Zhou, VP of AI at AMD, to unpack how these methods turn raw intelligence into usable,...

On the Wrong Side of the Bitter Lesson
Steve Yegge warns that future software will be judged by the "bitter lesson," a principle that favors brute‑force scaling over handcrafted intelligence. He argues that attempts to make AI or code inherently smarter place developers on the wrong side of...

Everyone’s Jeff Bezos Now
AI coding assistants now handle routine programming tasks, leaving developers to tackle more complex design challenges. Steve Yegge highlights this shift in a conversation with Tim O’Reilly, noting that AI solves the easy problems and pushes humans toward harder ones....

AI Coding: Balancing Speed & Quality with Addy Osmani
Addy Osmani discusses how AI‑generated pull requests are reshaping software teams, highlighting the tension between accelerated delivery and maintaining code quality. He notes that senior engineers increasingly feel swamped by a flood of AI‑written PRs they cannot fully comprehend. The core...

Technical Storytelling with Lena Reinhard and Priyanka Vergadia
Technical storytelling, speakers argue, transforms raw technical data into a compelling, decision-ready narrative by adding context, stakes, and human impact—turning a ‘recipe’ of facts into a digestible ‘meal.’ Simply listing metrics or project status loses executives’ attention; effective stories link...

Identify, Scope, and Build an Agentic Workflow in n8n with Max Tkacz
The video walks viewers through building an AI‑driven, agentic workflow in n8n, starting with a live demo that automates a repetitive competitor‑monitoring task. Max Tkacz emphasizes a disciplined triage process—evaluating potential automations on time saved, feasibility, risk of damage, and...
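The triage criteria Tkacz names (time saved, feasibility, risk of damage) lend themselves to a simple scoring rubric. The weights and the 1–5 scale below are illustrative assumptions, not from the video.

```python
# Hypothetical scoring sketch for an automation-triage rubric using
# Tkacz's criteria. Weights and the 1-5 scale are illustrative assumptions.

def triage_score(time_saved: int, feasibility: int, damage_risk: int) -> float:
    """Rank a candidate automation; each input is rated 1 (low) to 5 (high).

    Higher time savings and feasibility raise the score; a higher risk
    of damage lowers it.
    """
    for value in (time_saved, feasibility, damage_risk):
        if not 1 <= value <= 5:
            raise ValueError("ratings must be between 1 and 5")
    return 0.4 * time_saved + 0.4 * feasibility - 0.2 * damage_risk


# A high-value, low-risk task (like competitor monitoring) should
# outrank a marginal, high-risk one.
monitoring = triage_score(time_saved=5, feasibility=4, damage_risk=1)
risky_task = triage_score(time_saved=3, feasibility=2, damage_risk=5)
```

The design choice worth noting is the negative weight on damage risk: it encodes the talk's point that a workflow worth automating is one where a mistake is cheap to absorb.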

How to Build Reliable AI at Scale: Insights From Addy Osmani
Addy Osmani, working to bridge Google DeepMind research with product and developer teams, urges builders to move beyond one-off demos toward production-ready AI systems. He frames development on a spectrum from “wild west” solo experiments to enterprise-grade setups with quality...