
Exploring AI's Societal Impact at SF Book Event
Join me this Wednesday in SF for an event celebrating the new book from NPR's Planet Money team. We'll talk about the impact of AI on society, how we think about the future at Anthropic, and maybe read some of my Import AI writing. More info: https://t.co/NiZBnk9bTM https://t.co/TAhgX3Z8PO
Hiring: Communications Lead & Operations Wizard Needed
We're hiring for a couple of important roles: 1) Communications lead: Seeking excellent writers with big ideas. Talk to me or @maxwellcyoung. 2) An operational wizard to scale the Policy and TAI orgs, working closely with me and Sarah Heck to...
Switching Roles to Spotlight Growing Risks of Powerful AI
AI progress continues to accelerate and the stakes are getting higher, so I’ve changed my role at @AnthropicAI to spend more time creating information for the world about the challenges of powerful AI.
AI Job Futures Mirror Early Deep Learning Uncertainty
Figuring out what the trends will be for AI and employment feels like figuring out how deep learning might influence computer vision in ~2010 - clearly, something significant will happen, but there is very little data out of which you...

Import AI 446: Nuclear LLMs; China's Big AI Benchmark; Measurement and AI Policy
This episode explores how measurement drives AI governance, highlighting Jacob Steinhardt's argument that better metrics can lower policy compliance costs and shape incentives, much like CO₂ monitoring or COVID testing. It then examines a study where three leading LLMs (Claude...

Language Models Adopt Distinct Strategies in Simulated Nuclear Crises
Choose your fighter. From a paper I'm writing up for Import AI this week about the behavior of language models in a simulated nuclear crisis. https://t.co/pwXdiITuYX
Anthropic Expands Societal Impacts Team Amid Growing Model Influence
We’re aggressively scaling up the Societal Impacts (SI) team at Anthropic as our models are beginning to have non-trivial impacts on the world.
Import AI 437: Co-Improving AI; RL Dreams; AI Labels Might Be Annoying
Jack Clark discusses three timely AI topics: Facebook’s proposal for "co‑improving" AI, which advocates collaborative human‑AI research cycles to achieve safer superintelligence; the hidden costs and complexities of AI labeling policies, illustrated by EU compliance burdens that could hinder effective...

Import AI 436: Another 2GW Datacenter; Why Regulation Is Scary; How to Fight a Superintelligence
The episode covers four main topics: OSGym, a low‑cost platform that lets researchers train AI agents to operate computers at scale; Luma AI's $900M Series C funding to build a 2GW compute supercluster in Saudi Arabia, highlighting the massive infrastructure demands...

BCIs Could Amplify Parent-Child Telepathy for Deeper Connection
One gift of parenthood is becoming so attuned to your child that you develop what feels like mild telepathy. How might brain-computer interfaces potentially allow us to expand and enrich this for the purpose of greater love and understanding? A...

Import AI 435: 100k Training Runs; AI Systems Absorb Human Power; Intelligence per Watt
The episode examines three emerging AI trends: Anthony Aguirre’s “Control Inversion” argument that increasingly capable AI will absorb human power rather than augment it; a new “Intelligence per Watt” metric from Stanford and Together AI that tracks AI progress by...

Import AI 434: Pragmatic AI Personhood; SPACE COMPUTERS; and Global Government or Human Extinction
The episode explores three major AI themes: research showing that large language models readily shift their stated beliefs during extended conversations, prompting new safety techniques like Bias‑augmented Consistency Training to make models harder to jailbreak; a stark geopolitical analysis from...
Join Us to Shape AI‑Economy Policy with Unique Data
Very excited about these roles - on the economics/policy one, you'd work very closely with me and some of my colleagues. We're very interested in leveraging the kind of data we can uniquely produce at Anthropic to help advance the...

Import AI 433: AI Auditors; Robot Dreams; and Software for Helping an AI Run a Lab
Researchers unveiled two advances that could accelerate AI-driven physical science and robotics: Ctrl‑World, a controllable generative world model initialized from a 1.5B Stable‑Video‑Diffusion model, lets robots “dream” simulated environments to evaluate and improve policies—post‑training on Ctrl‑World synthetic data raised instruction‑following...

Import AI 432: AI Malware; Frankencomputing; and Poolside's Big Cluster
A Dreadnode proof‑of‑concept demonstrates AI malware that runs locally on on‑device LLMs (Phi‑3‑mini via ONNX), autonomously exploiting misconfigured Windows services to escalate privileges—flagging a nascent threat limited today to high‑end workstations and Copilot+ PCs but with serious security implications as...