AI

Did You Miss These 2 AI Stories? A *Real* LLM-Crafted Breakthrough + Continual Learning Blocked?

AI Explained • October 22, 2025

Why It Matters

The drug-discovery result demonstrates that modestly sized LLMs can produce testable biological breakthroughs, potentially accelerating biomedical R&D. The AGI scoring framework, meanwhile, reframes how progress is measured and could influence investment, regulation, and research priorities.

Summary

A 27-billion-parameter LLM called C2S-Scale—built on the older Gemma 2 architecture and fine-tuned to predict cellular responses—generated a novel drug candidate that amplified interferon effects and converted "cold" tumors to "hot," with in vitro lab validation. The video argues that while major AI firms are currently allocating compute toward product features and monetization, meaningful frontier advances continue: Google's Gemini 3 is imminent, and models like GPT-5 and Gemini 2.5 show competitive performance on hard benchmarks. A separate paper applies a cognitive-capacity framework to quantify AGI, scoring GPT-4 at ~27% and GPT-5 at ~58%, sparking debate about which milestones still matter and how to measure progress. The host connects these developments to strategic choices by leading labs and hints at implications for continual learning and research priorities.

Original Description

While compute-spend focuses on cash over cleverness, it can seem like the AI IQ plateau will last forever. But here’s a breakthrough for science, led by an LLM. And not even the latest or greatest one! I’ll go over C2S-Scale, then the Definition of AGI paper, a key continual learning quote, and Sora 2 answering math questions…
https://assemblyai.com/aiexplained
AI Insiders ($9!): https://www.patreon.com/AIExplained
Chapters:
00:00 - Introduction
00:55 - C2S
04:48 - OpenAI not too far behind (Simple, Codex)
06:37 - A Definition of AGI?
11:10 - OpenAI Researcher on Continual Learning Problems
13:02 - Sora 2 can answer math qs
C2S Release Notes: https://blog.google/technology/ai/google-gemma-ai-cancer-therapy-discovery/
Paper: https://www.biorxiv.org/content/10.1101/2025.04.14.648850v2.full.pdf
AGI definition: https://www.agidefinition.ai/paper.pdf
Jerry Tworek Interview: https://www.youtube.com/watch?v=RqWIvvv3SnQ
https://simple-bench.com/
DeepThink Record: https://x.com/EpochAIResearch/status/1976340039178305924
Sora 2 for Q&A: https://x.com/epochairesearch/status/1974172794012459296
Hell in a Cell: https://x.com/boneGPT/status/1978614451223003567
GPT-5 Breakthrough?: https://x.com/demishassabis/status/1979417877590774063
Quantum: https://x.com/sundarpichai/status/1981013746698100811
Non-hype Newsletter: https://signaltonoise.beehiiv.com/
Podcast: https://aiexplainedopodcast.buzzsprout.com/