
The episode covers four main topics: OSGym, a low‑cost platform that lets researchers train AI agents to operate computers at scale; Luma AI’s $900M Series C funding to build a 2 GW compute supercluster in Saudi Arabia, highlighting the massive infrastructure demands of frontier AI; Peter Reinhardt’s warning that over‑regulation can cripple innovation, using his hardware startups as a cautionary tale for AI policy; and a RAND paper outlining extreme countermeasures—such as high‑altitude EMPs and global internet shutdowns—for confronting a hostile superintelligence, underscoring the grave challenges of AI safety. The guests include the OSGym research team, Luma AI’s leadership, entrepreneur Peter Reinhardt, and RAND analysts, each offering expertise on technical scaling, market dynamics, regulatory pitfalls, and existential risk mitigation.

One gift of parenthood is becoming so attuned to your child that you develop what feels like mild telepathy. How might brain-computer interfaces potentially allow us to expand and enrich this for the purpose of greater love and understanding? A...

The episode examines three emerging AI trends: Anthony Aguirre’s “Control Inversion” argument that increasingly capable AI will absorb human power rather than augment it; a new “Intelligence per Watt” metric from Stanford and Together AI that tracks AI progress by...

The episode explores three major AI themes: research showing that large language models readily shift their stated beliefs during extended conversations, prompting new safety techniques like Bias‑augmented Consistency Training to make models harder to jailbreak; a stark geopolitical analysis from...
Very excited about these roles - on the economics/policy one, you'd work very closely with me and some of my colleagues. We're very interested in leveraging the kind of data we can uniquely produce at Anthropic to help advance the...

Researchers unveiled two advances that could accelerate AI-driven physical science and robotics: Ctrl‑World, a controllable generative world model initialized from a 1.5B Stable‑Video‑Diffusion model, lets robots “dream” simulated environments to evaluate and improve policies—post‑training on Ctrl‑World synthetic data raised instruction‑following...

A Dreadnode proof‑of‑concept demonstrates AI malware that runs locally on an on‑device LLM (Phi‑3‑mini via ONNX), autonomously exploiting misconfigured Windows services to escalate privileges—flagging a nascent threat limited today to high‑end workstations and Copilot+ PCs but with serious security implications as...
The aesthetics and language of the distributed AI training / homebrew AI community are fascinating and motivating, and this is true across multiple organizations ranging from Prime Intellect to Nous to Exo. It's cool!

Technological Optimism and Appropriate Fear - an essay where I grapple with how I feel about the continued steady march towards powerful AI systems. The world will bend around AI much as a black hole pulls and bends everything...
What do we do if AI progress keeps happening?

Gosh, I hope my overly raw, very personal, and emotional essay about my relationship to AI, called "Technological Optimism and Appropriate Fear," doesn't get misinterpreted! (Comes out tomorrow AM ET in Import AI). https://t.co/aovFZqjxpg
The Allen Institute for AI Research (AI2) received $152 million in combined funding—$75M from the National Science Foundation and $77M from NVIDIA—to support the Open Multimodal AI Infrastructure to Accelerate Science (OMAI) project, aiming to build a national-level open AI...