
The new Gemini Deep Think is achieving some truly incredible numbers on ARC-AGI-2. We certified these scores in the past few days. https://t.co/Q9qeJbCObK
You no longer need to leave Python to write high-performance hardware kernels. Learn how to use Pallas in Keras to author custom ops that lower to Mosaic for TPUs or Triton for GPUs: https://t.co/oeV4cmV4M0
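A minimal sketch of the kind of Pallas kernel the tutorial covers, assuming JAX is installed (the kernel and wrapper names here are illustrative, not taken from the tutorial; `interpret=True` runs the kernel on CPU so the sketch stays portable):

```python
import jax
import jax.numpy as jnp
from jax.experimental import pallas as pl

def add_kernel(x_ref, y_ref, o_ref):
    # Refs are views into the operand blocks: read inputs, write output.
    o_ref[...] = x_ref[...] + y_ref[...]

def add(x, y):
    # pallas_call lowers the kernel to Mosaic on TPU or Triton on GPU;
    # interpret=True here evaluates it in pure JAX for portability.
    return pl.pallas_call(
        add_kernel,
        out_shape=jax.ShapeDtypeStruct(x.shape, x.dtype),
        interpret=True,
    )(x, y)

print(add(jnp.arange(8.0), jnp.ones(8)))
```

On real hardware you would drop `interpret=True` and typically add a grid and `BlockSpec`s to tile large arrays.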
One positive outcome of AI is that it will make the aging-related decline of your brain power less relevant. You can keep doing the same things just as nimbly at any age by offloading more.
I tend to view code as more of a liability than an asset. In this light, making it cheaper and faster to generate a lot of code might not be an unmitigated blessing.
GenAI will not replace human ingenuity. It will simply raise the floor for mediocrity so high that being "pretty good" becomes economically worthless.
Grandiose vagueposting on Twitter is the one tried-and-true marketing strategy for AI labs. But as it gets overused, it eventually creates fatigue.
The Transformer architecture is fundamentally a parallel processor of context, but reasoning is a sequential, iterative process. To solve complex problems, a model needs a "scratchpad" not just in its output CoT, but in its internal state. A differentiable way to...
The goal of AI should not be to replace human thought and human agency, but to expand them. Not everything needs to be automated.
LLMs represent the "library" phase of AI. The next phase will be the "scientist" phase. A library contains answers, but a scientist knows how to find answers that don't exist yet.
Evaluating the potential of LLMs to help with scientific discovery. In short: new ideas are direly needed to move AI towards invention. LLMs can be useful as brainstorming partners though. https://t.co/Zd0EKf8Z3n

New Keras release: 3.13 🎉 Some major new features: • Model export to LiteRT (formerly TFLite) for mobile/edge • GPTQ quantization support for post-training compression • New Adaptive Pooling layers for dynamic architectures https://t.co/Ogmag7FYCY
Innovation is a "strong-link problem". In a chain (weak-link problem), the weakest element breaks the system. In discovery (strong-link problem), the strongest element makes the breakthrough. The rest of the system provides the infrastructure that allows the outlier to function.
Because our universe follows stable laws, a sufficiently general intelligent system adapted to it, like human-driven science, can eventually model any phenomenon within it. Human intelligence may not be "universal" in the mathematical sense (see No Free Lunch theorem), but we...
I would say there is no such thing as "universal" intelligence but there is definitely such a thing as "general" intelligence, and as a collective, we have it. "Science", modeled as an intelligent system (primarily powered by human intelligence) can solve...
You should measure human capability on a task not in terms of "average human" or "random human", but in terms of your best alternative (to AI) if you were to hire a human to solve the task. Which isn't average...
Looking forward to the ARC-AGI-3 numbers :)
AI will evolve from being an automation machine to becoming an invention machine. This will require a fundamentally new paradigm, with symbolic search as its core, not curve-fitting

Fluid intelligence as measured by ARC 1 & 2 is your ability to turn information into a model that will generalize. That's not the only thing you need to make an intelligent agent. To start with, when you're an agent in...
Back in 2019, ARC 1 had one goal: to focus the attention of AI researchers towards the biggest bottleneck on the way to generality, the ability to adapt to novelty on the fly, which was entirely missing from the legacy...
Cyril and the team at CTGT are productizing mechanistic interpretability. They make it possible to edit the behavior of LLMs to add safety policy guarantees without retraining, in a way that is much more reliable than simple prompting.
Congrats to the ARC Prize 2025 winners! The Grand Prize remains unclaimed, but nevertheless 2025 saw remarkable progress on LLM-driven refinement loops, both with "local" models and with commercial frontier models. We also saw the rise of zero-pretraining DL approaches like HRM...
The Keras community video meeting is happening today at 10am PT (in 1 hr 10 min). Join to get updates on the development roadmap and ask questions to the Keras team. URL in next tweet
Either you crack general intelligence -- the ability to efficiently acquire arbitrary skills on your own -- or you don't have AGI. A big pile of task-specific skills memorized from handcrafted/generated environments isn't AGI, no matter how big.
My prediction of Waymo covering >50% of the US by eoy 2028 is looking good
There's a specific threshold of complexity and self-direction below which a system degenerates, and above which it can open-endedly self-improve. Current AI systems aren't close to it yet. But it's inevitable we will reach this point eventually. When we do, we...
Waymo started testing with a safety driver in Dallas just 4 months ago. They're now fully driverless -- no one but you in the car. Waymo has been expanding at >500% per year.
To perfectly understand a phenomenon is to perfectly compress it, to have a model of it that cannot be made any simpler. If a DL model requires millions of parameters to model something that can be described by a differential equation of...
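A toy illustration of the compression point (my example, not from the post): data generated by the one-parameter ODE y' = -k·y is fully "compressed" by the single number k, which a one-parameter log-linear fit recovers exactly, where a generic curve-fitter would spend far more parameters:

```python
import numpy as np

k_true = 0.7
t = np.linspace(0.0, 5.0, 50)
y = np.exp(-k_true * t)  # closed-form solution of y' = -k*y

# Recover k with a one-parameter least-squares fit in log space:
# log y = -k * t, so the fitted slope is -k.
k_est = -np.polyfit(t, np.log(y), 1)[0]
print(k_est)  # one number suffices to reproduce the whole dataset
```

The entire phenomenon compresses to one parameter; any model that needs more than that has not fully understood it in this sense.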
Black Friday deal for Deep Learning with Python (3rd edition): 50% off, just today. Go buy it: https://t.co/EL58J1Zl22
https://t.co/XJNnjRCyYL
Gemini 3 scores 31.1% on ARC-AGI-2. Impressive progress.