
AI Finds Vulns You Can't With Nicholas Carlini
In this episode, hosts Deirdre and David Amos sit down with vulnerability researcher Nicholas Carlini to discuss how large language models (LLMs) are now being used to discover software bugs, including zero-day vulnerabilities. Carlini explains that recent advances let a simple script prompt an LLM to act like a fuzzer, generating inputs that trigger ASAN crashes and even surfacing serious issues such as a SQL injection in the Ghost CMS, complete with an exploit script. He outlines the methodology: a crash oracle validates memory-corruption bugs, while a critique-agent pipeline catches higher-level flaws. Notably, the work leverages publicly available production models, not custom-trained ones. The conversation highlights both the promise of AI-driven security research and the continued need for careful human verification.
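The crash-oracle loop described in the episode can be sketched roughly as below. This is a minimal illustration, not Carlini's actual harness: `llm_generate_inputs` is a hypothetical stand-in for the model call (a real harness would query a production LLM with the target's source code), and the target command is assumed to be an ASAN-instrumented binary whose stderr contains an AddressSanitizer report on a crash.

```python
import subprocess

def llm_generate_inputs(prompt):
    """Hypothetical stand-in for the LLM call. A real harness would
    send `prompt` (including the target's source) to a production
    model and parse candidate inputs out of the response. Stubbed
    here with fixed candidates for illustration."""
    return [b"A" * 16, b"\x00" * 4096, b"%n%n%n%n", b"A" * 100_000]

def crash_oracle(cmd, data):
    """Feed one input to the (assumed ASAN-instrumented) target on
    stdin; treat a nonzero exit plus an AddressSanitizer report on
    stderr as a crash."""
    proc = subprocess.run(cmd, input=data, capture_output=True)
    return proc.returncode != 0 and b"AddressSanitizer" in proc.stderr

def fuzz(cmd, prompt, rounds=1):
    """Ask the model for inputs, run each through the crash oracle,
    and keep the crashers for human triage."""
    crashes = []
    for _ in range(rounds):
        for candidate in llm_generate_inputs(prompt):
            if crash_oracle(cmd, candidate):
                crashes.append(candidate)
    return crashes
```

The crash oracle is what makes memory-corruption bugs easy to automate: the LLM only has to propose inputs, and ASAN provides an unambiguous yes/no signal, with humans verifying the survivors.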

Python Cryptography Breaks Up with OpenSSL with Paul Kehrer and Alex Gaynor
In this episode, Alex Gaynor and Paul Kehrer discuss the Python cryptography library’s decision to move away from OpenSSL as its primary backend, citing long‑standing maintenance headaches and architectural constraints. They explain the technical challenges they faced with OpenSSL’s API...

The IACR Can't Decrypt with Matt Bernhard
The episode examines the IACR's botched Helios election, where a key management failure forced the organization to discard the vote and schedule a new election. Guest Matt Bernhard, an expert in secure voting systems, explains how Helios' homomorphic encryption works,...
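As background for the discussion, Helios tallies ballots under exponential ElGamal, an additively homomorphic scheme: multiplying ciphertexts adds the underlying votes, so the trustees decrypt only the total, never an individual ballot. A toy sketch with deliberately tiny parameters (real deployments use large prime-order groups, threshold key shares among trustees, and zero-knowledge proofs of ballot validity, all omitted here):

```python
import random

# Toy parameters: p = 2q + 1 is a safe prime, g generates the
# order-q subgroup. Far too small for real use.
p = 2063
q = (p - 1) // 2  # 1031, prime
g = 4             # a quadratic residue mod p, so it has order q

x = random.randrange(1, q)  # trustee secret key
h = pow(g, x, p)            # election public key

def encrypt(vote):
    """Exponential ElGamal: the vote sits in the exponent, so
    multiplying ciphertexts adds the plaintext votes."""
    r = random.randrange(1, q)
    return (pow(g, r, p), pow(g, vote, p) * pow(h, r, p) % p)

def tally(ciphertexts):
    """Homomorphically combine all ballots, then decrypt the total."""
    a = b = 1
    for (ai, bi) in ciphertexts:
        a = a * ai % p
        b = b * bi % p
    # b * a^(-x) = g^(sum of votes); a^(-x) = a^(q-x) in the order-q group.
    gm = b * pow(a, q - x, p) % p
    # Recover the small exponent by brute force (totals are small).
    total = 0
    while pow(g, total, p) != gm:
        total += 1
    return total
```

The final brute-force step is why exponential ElGamal works for elections: the decryption yields g raised to the vote total, and the total is small enough to find by search, whereas recovering an arbitrary discrete log would be infeasible.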
