Did You Miss These 2 AI Stories? A *Real* LLM-Crafted Breakthrough + Continual Learning Blocked?

AI Explained
Oct 22, 2025

Why It Matters

The drug-discovery result demonstrates that modestly sized LLMs can produce testable biological breakthroughs, potentially accelerating biomedical R&D, while the AGI scoring framework reframes how progress is measured and could influence investment, regulation, and research focus.

Summary

A 27-billion-parameter LLM called C2S-Scale—built on the older Gemma 2 architecture and fine-tuned to predict cellular responses—generated a novel drug candidate that amplified interferon effects and converted 'cold' tumors to 'hot,' a result validated in vitro. The video argues that while major AI firms are currently allocating compute toward product features and monetization, meaningful frontier advances continue: Google's Gemini 3 is imminent, and models like GPT-5 and Gemini 2.5 show competitive performance on hard benchmarks. A separate paper applies a cognitive-capacity framework to quantify AGI, scoring GPT-4 at ~27% and GPT-5 at ~58%, sparking debate about which milestones still matter and how to measure progress. The host connects these developments to strategic choices by leading labs and hints at implications for continual learning and research priorities.

Original Description

While compute spend favors cash over cleverness, it can seem like the AI IQ plateau will last forever. But here's a breakthrough for science, led by an LLM—and not even the latest or greatest one! I'll go over C2S-Scale, then the Definition of AGI paper, a key continual learning quote, and Sora 2 answering math questions…
Chapters:
00:00 - Introduction
00:55 - C2S
04:48 - OpenAI not too far behind (Simple, Codex)
06:37 - A Definition of AGI?
11:10 - OpenAI Researcher on Continual Learning Problems
13:02 - Sora 2 can answer math qs
