The drug-discovery result demonstrates that modestly sized LLMs can produce testable biological findings, potentially accelerating biomedical R&D. Meanwhile, the AGI scoring framework reframes how progress is measured and could influence investment, regulation, and research priorities.
A 27-billion-parameter LLM called C2S-Scale—built on the older Gemma 2 architecture and fine-tuned to predict cellular responses—generated a novel drug candidate that amplified interferon signaling and converted "cold" tumors to "hot," a result validated in vitro. The video argues that although major AI firms are currently allocating compute toward product features and monetization, meaningful frontier advances continue: Google's Gemini 3 is said to be imminent, and models such as GPT-5 and Gemini 2.5 show competitive performance on hard benchmarks. A separate paper applies a cognitive-capacity framework to quantify progress toward AGI, scoring GPT-4 at roughly 27% and GPT-5 at roughly 58%, sparking debate about which milestones still matter and how progress should be measured. The host connects these developments to strategic choices at leading labs and hints at implications for continual learning and research priorities.