
The livestream serves as a rapid, Wikipedia‑sourced roundup of the year’s most notable scientific breakthroughs, spanning astronomy, physics, chemistry, and engineering. Hosted on New Year’s Eve, the presenter walks through dozens of discoveries, offering brief commentary where possible. Among the highlights, a near‑Earth asteroid’s impact probability fell from roughly 2% to 0.0017% for a 2032 encounter, averting the need for drastic deflection measures. In astronomy, the James Webb Space Telescope identified the most distant galaxy yet (redshift 14.44, about 280 million years after the Big Bang), while the Very Large Telescope produced the first three‑dimensional map of an exoplanet’s atmosphere. Saturn’s moon count rose to 274, and NASA confirmed its 6,000th exoplanet, underscoring a surge in planetary detection. The stream also highlighted potential biosignatures: dimethyl sulfide and dimethyl disulfide were detected in the atmosphere of exoplanet K2‑18b, the strongest hint of extraterrestrial life to date, and organic molecules were found on Enceladus and asteroid Bennu, bolstering panspermia theories. In physics, Italian researchers reported turning light into a “super‑solid,” MIT captured images of free‑moving atoms, and CERN’s ALICE experiment famously transmuted lead into gold, reviving alchemical aspirations. Collectively these advances illustrate an accelerating pace of discovery across disciplines, with implications for planetary defense, astrobiology, quantum materials, and high‑energy physics. The breadth of 2025’s breakthroughs signals growing investment in large‑scale observatories and collaborative research, setting a high bar for the scientific agenda in 2026 and beyond.

The video outlines a pragmatic five‑phase roadmap for launching a data‑science career by 2026, emphasizing hands‑on project work over abstract theory. It begins with a foundational tier covering Python, SQL, statistics, exploratory data analysis, and prompt engineering using AI as...

The video demystifies fine‑tuning, the technique of taking a pre‑trained large language model and further training it on a narrow, high‑quality dataset to make it proficient at a specific task. Unlike the massive, generic corpus used for pre‑training, fine‑tuning relies on...
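Most fine‑tuning pipelines consume the narrow dataset described above as instruction–response pairs, often serialized as JSONL. A minimal sketch of preparing such a file (the examples and field names are invented for illustration; exact schemas vary by framework):

```python
import json

# Hypothetical examples of a narrow, task-specific fine-tuning set.
# Instruction/response pairs are a common format; field names vary by framework.
examples = [
    {"instruction": "Classify the sentiment: 'Great battery life.'", "response": "positive"},
    {"instruction": "Classify the sentiment: 'Screen cracked in a week.'", "response": "negative"},
]

def to_jsonl(records):
    """Serialize records to JSONL: one JSON object per line."""
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(examples)
print(jsonl)
```

The key contrast with pre‑training is scale and focus: a few thousand curated pairs like these, rather than a trillion‑token generic corpus.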
![Your Brain Doesn't Command Your Body. It Predicts It. [Max Bennett]](/cdn-cgi/image/width=1200,quality=75,format=auto,fit=cover/https://i.ytimg.com/vi/RvYSsi6rd4g/maxresdefault.jpg)
The video centers on Max Bennett’s new book, which argues that the brain does not merely command the body but constantly predicts it. Bennett approaches the problem from an outsider’s stance, weaving together comparative psychology, evolutionary neuroscience, and artificial intelligence...

The video surveys three seismic shifts in the AI ecosystem: NVIDIA’s strategic acquisition of Groq’s top engineers through a non‑exclusive licensing pact, a forecasted explosion in the robotics sector, and the rapid maturation of AI‑driven code generation tools like Claude...

The video explains the fundamental distinction between base models and instruct models in modern AI development. A base model is the product of large‑scale pre‑training; it stores vast factual information but is not optimized for following user instructions or sustaining...
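The practical difference shows up in how input is presented to the model: a base model just continues raw text, while an instruct model expects conversations wrapped in a chat template. A sketch of the contrast (the tags below are illustrative placeholders, not any specific model’s real template):

```python
# A base model receives raw text and simply predicts the continuation.
def base_prompt(text: str) -> str:
    return text  # no structure: the model just keeps writing

# An instruct model is trained on conversation-formatted data, so input is
# wrapped in role markers before generation. Tag names here are made up.
def instruct_prompt(user_msg: str, system: str = "You are a helpful assistant.") -> str:
    return (
        f"<|system|>{system}<|end|>\n"
        f"<|user|>{user_msg}<|end|>\n"
        f"<|assistant|>"
    )

print(instruct_prompt("Summarize this article."))
```

The trailing assistant marker is what cues an instruct model to answer rather than, say, continue the user’s sentence.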

The video walks through the foundational phase that turns a random‑parameter network into a functional language model, known as pre‑training. It describes how the model is fed an enormous corpus of text and code from the internet and tasked with...
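The objective described above — predict the next token given what came before — can be illustrated with a toy count‑based model. Real pre‑training learns billions of parameters over internet‑scale data, but the prediction target is the same:

```python
from collections import Counter, defaultdict

# Toy illustration of the pre-training objective. "Tokens" here are
# whitespace-split words and the "model" is just bigram counts.
corpus = "the cat sat on the mat . the cat ran ."
tokens = corpus.split()

counts = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the continuation seen most often after `token` in the corpus."""
    return counts[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Scaling this idea from counted bigrams to a deep network over trillions of tokens is, in essence, what turns a random‑parameter network into a language model.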

The video introduces Ralph, a new plugin for Anthropic’s Claude Code that transforms the agent from a one‑shot tool into a persistent loop that won’t exit until a defined goal is met. By leveraging Claude Code’s hook system—specifically the stop...
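For context, Claude Code hooks are configured in a settings file, and a stop hook can veto the agent’s attempt to exit. A hedged sketch of what such an entry might look like (the script path is hypothetical, and Ralph’s actual plugin wiring may differ):

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "./check-goal.sh"
          }
        ]
      }
    ]
  }
}
```

If the command signals a block (for example via a blocking exit status), the agent resumes working instead of exiting — the loop mechanic that keeps Ralph running until the goal is met.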

The video outlines five advanced, end‑to‑end AI projects designed to make candidates job‑ready for 2026. It walks through building a LlamaIndex RAG system, a LangChain‑based document retriever, a fact‑grounded RAG question‑answering pipeline, a transformer model in PyTorch, and an LLM‑powered chatbot assistant,...
![Why Scientists Can't Rebuild a Polaroid Camera [César Hidalgo]](/cdn-cgi/image/width=1200,quality=75,format=auto,fit=cover/https://i.ytimg.com/vi/vzpFOJRteeI/maxresdefault.jpg)
César Hidalgo’s new book, *The Infinite Alphabet and the Laws of Knowledge*, argues that knowledge can be studied scientifically through three robust laws governing its growth over time, its diffusion across space and activity, and its valuation. By treating knowledge...

The Nvidia TiDAR paper introduces a hybrid autoregressive‑diffusion language model that exploits unused GPU capacity during large‑language‑model inference. By combining diffusion‑style parallel token prediction with traditional autoregressive sampling, TiDAR achieves higher throughput while preserving the exact output distribution of a...
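The exactness claim is reminiscent of speculative decoding, where cheap parallel drafts are verified against the target model and only agreeing prefixes are kept, so the final text matches what the target model alone would emit. A toy greedy sketch of that draft‑and‑verify loop (not TiDAR’s actual architecture — both “models” here are lookup tables):

```python
# A cheap "drafter" proposes a block of tokens; the "target" model verifies
# them and keeps only the prefix it agrees with, so output is identical to
# what the target would generate on its own.
drafter = {"the": "cat", "cat": "sat", "sat": "on", "on": "a"}
target  = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}

def generate(start: str, steps: int) -> list:
    out, cur = [], start
    while len(out) < steps:
        # Draft a block of tokens (parallel in real systems; sequential here).
        draft, d = [], cur
        for _ in range(3):
            d = drafter.get(d, ".")
            draft.append(d)
        # Verify: accept the longest prefix the target model agrees with.
        for tok in draft:
            if target.get(cur, ".") == tok and len(out) < steps:
                out.append(tok)
                cur = tok
            else:
                break
        else:
            continue  # whole draft accepted; draft again
        # On mismatch, take the target model's own token and resume drafting.
        if len(out) < steps:
            tok = target.get(cur, ".")
            out.append(tok)
            cur = tok
    return out

print(generate("the", 5))
```

Because every emitted token is either verified or produced by the target model itself, throughput rises without changing the output distribution — the property the paper’s hybrid scheme also guarantees.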

The video spotlights a persistent pain point for AI product teams: designing efficient PostgreSQL schemas from scratch. Krish Naik explains that generic large‑language models often miss optimal data types, table relationships, and indexing strategies, leading to sub‑par implementations. To address...

Andon Labs introduced a next‑level benchmark that places large language model agents in charge of a physical vending machine, aiming to gauge how well AI can run a small business without human oversight. The Vending‑Bench simulation, launched in February, tasks the...

The video outlines five high‑impact agentic AI projects that developers should prioritize in 2026, positioning them as core competencies for modern AI engineering teams. Each project emphasizes autonomy, orchestration, and real‑world execution, reflecting the shift from static language models to...

The video explains how modern language models move beyond simple token IDs toward semantic representations called embeddings. While tokenization converts user input into arbitrary numeric identifiers, those IDs carry no information about word meaning or relationships, preventing the model from...
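The gap between the two representations is easy to demonstrate: token IDs are arbitrary integers that encode nothing about meaning, while embeddings support similarity comparisons. The vectors below are invented purely for illustration:

```python
import math

# Token IDs are arbitrary: "cat" -> 5 and "kitten" -> 9041 says nothing about
# their relationship. Embeddings place tokens in a vector space where related
# words land close together. These 3-d vectors are made up for illustration.
token_ids = {"cat": 5, "kitten": 9041, "economy": 17}

embeddings = {
    "cat":     [0.90, 0.80, 0.10],
    "kitten":  [0.85, 0.75, 0.20],
    "economy": [0.10, 0.20, 0.90],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

print(cosine(embeddings["cat"], embeddings["kitten"]))   # high: related words
print(cosine(embeddings["cat"], embeddings["economy"]))  # low: unrelated words
```

No arithmetic on the raw IDs (5 vs. 9041) could recover this relatedness, which is why models learn an embedding layer immediately after tokenization.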

The video presents the creator’s “AI Awards 2025,” a rundown of twenty‑plus categories ranging from best vibe‑coding platform to AI person of the year, with the host naming a single winner for each based on personal usage and market impact. Among...

The video tackles a common misconception that large language models (LLMs) learn in the same way humans do, arguing that the similarity ends at a superficial level of pattern imitation. It breaks the discussion into three parts – pre‑training, fine‑tuning/reinforcement...

The video highlights a glaring omission in the rapidly expanding field of large language models (LLMs): there is no standardized leaderboard or metric that evaluates safety. While performance, speed, and intelligence are routinely benchmarked, safety—especially when models are deployed for...

In a Data Science Dojo webinar, Zaid Ahmed led a workshop on the Agent-to-Agent (A2A) protocol, positioning it alongside Model Context Protocol (MCP) as a solution for building interoperable multi-agent systems. He recapped MCP’s role in wrapping APIs for LLM...
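For orientation, A2A agents advertise themselves through a JSON “agent card” served at a well‑known URL, which is how other agents discover their skills. A hedged sketch of the general shape (all values invented; consult the A2A specification for the authoritative schema):

```json
{
  "name": "InvoiceAgent",
  "description": "Extracts line items from uploaded invoices",
  "url": "https://agents.example.com/invoice",
  "version": "1.0.0",
  "capabilities": { "streaming": false },
  "skills": [
    {
      "id": "extract",
      "name": "Extract line items",
      "description": "Parse an invoice and return structured line items"
    }
  ]
}
```

Where MCP wraps tools and APIs for a single LLM, a card like this is what lets one agent delegate work to another.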

The video walks viewers through a step‑by‑step tutorial on building a production‑grade customer‑support AI agent using the Vercel AI SDK, OpenAI’s models, and a Supabase vector store. It frames the project as a concrete example of the emerging class...

The video argues that interactive sessions consistently outperform traditional slide‑based presentations, using a live, hands‑on demo to illustrate the point. The presenter walks the audience through a simple exercise on bolt.new, asking everyone to copy‑paste a prompt that generates a...

The video demystifies large language models (LLMs) by framing them as sophisticated autocomplete engines. It explains that an LLM’s core task is to predict the most probable next token—whether a whole word, a sub‑word fragment, or punctuation—based on the preceding...
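That next‑token framing can be made concrete: the model’s final layer emits one score (logit) per vocabulary entry, softmax converts the scores into probabilities, and decoding picks from them. The vocabulary and scores below are toy values for illustration:

```python
import math

# An LLM assigns a score (logit) to every token in its vocabulary; softmax
# turns scores into probabilities that sum to 1, and greedy decoding picks
# the most probable token. Real vocabularies have ~100k entries.
vocab = ["dog", "cat", "ran", "."]
logits = [1.2, 0.3, 2.5, -1.0]   # hypothetical scores for the next token

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
next_token = vocab[probs.index(max(probs))]  # greedy decoding
print(next_token)
```

Repeating this step — append the chosen token, score the vocabulary again — is the “sophisticated autocomplete” loop the video describes; sampling from `probs` instead of taking the maximum is what makes outputs vary between runs.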

The video opens by positioning the holiday season as an opportune moment for data scientists to bolster their professional portfolios, introducing five fully‑solved projects designed to showcase a breadth of analytical and machine‑learning competencies. Each project is presented as a...

The video introduces a new daily short‑form series aimed at demystifying generative AI for a broad audience. It opens by acknowledging the common frustration of receiving slow, vague, or inaccurate answers from tools like ChatGPT, Gemini, or Claude, and...

In this concise tutorial, the presenter promises to teach viewers everything they need to know about Python’s requests library in just fifteen minutes, focusing on how to call APIs, the underlying HTTP concepts, and practical code examples. The video begins with...
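As a taste of the library, building a prepared request shows how a params dict becomes a URL‑encoded query string — one of the HTTP concepts the tutorial covers — without touching the network (the endpoint below is a placeholder):

```python
import requests

# Query parameters passed as a dict are URL-encoded into the query string.
# Preparing the request exposes the final method and URL without sending it.
req = requests.Request(
    "GET",
    "https://api.example.com/search",   # placeholder endpoint
    params={"q": "python", "page": 2},
)
prepared = req.prepare()
print(prepared.method, prepared.url)

# In real use you would send it (requires network access):
#   resp = requests.Session().send(prepared)
#   resp.raise_for_status()
#   data = resp.json()
```

In everyday code the one‑liner `requests.get(url, params=...)` does all of this — construction, sending, and response parsing — in a single call.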

The video serves as a crash‑course on the newly released LangChain v1, walking viewers through the framework’s most significant updates and demonstrating how to build autonomous agents with the latest features. Krish Naik frames the tutorial as a one‑shot guide for...

The video centers on the creator’s strategy for building “micro AI businesses” that generated over $10,000 in 2025 and outlines a plan to double‑down on this model in 2026. He frames the approach as a fast‑paced, low‑risk, high‑reward side‑hustle that...

In a brisk ten‑minute demo, the presenter showcases how Claude Code, Anthropic’s agentic coding assistant, can orchestrate an end‑to‑end data‑analysis workflow for a personal mortgage decision. Starting with a natural‑language query about fixed versus variable rates in Canada, Claude is prompted...

The video spotlights a breakthrough from NVIDIA that replaces traditional capsule‑based NPC movement with fully physically simulated humanoids. By coupling a diffusion‑based path planner called Trace with a joint‑control system dubbed Pacer, the researchers enable agents to generate and follow...

Google unveiled T5Gemma 2, the latest iteration of its encoder‑decoder AI family built on the Gemma 3 architecture, positioning it as a purpose‑built engine for long‑form text and multimodal reasoning. The announcement highlights a shift from the dominant decoder‑only “ChatGPT‑style” models toward...
![Are AI Benchmarks Telling The Full Story? [SPONSORED]](/cdn-cgi/image/width=1200,quality=75,format=auto,fit=cover/https://i.ytimg.com/vi/rqiC9a2z8Io/maxresdefault.jpg)
The video critiques the current reliance on technical AI benchmarks, arguing that they miss the human‑centric aspects of large language model (LLM) performance. Andrew Gordon and Nora Petrova of Prolific explain that while models may ace exams like MMLU or...

The video walks viewers through the MTEB (Massive Text Embedding Benchmark) leaderboard, positioning it as a practical guide for selecting open‑source embedding models and tuning modules for vector‑search applications. The presenter highlights recent UI changes—new benchmarks, language options, and domain‑specific...

In the latest Shipmas Day 15 broadcast, the host walks viewers through a “skill” framework for Anthropic’s Claude model, arguing that modular skill files will become the dominant way developers harness AI code generation by 2026. The workflow hinges on a...

The video centers on the persistent problem of AI hallucinations—instances where large language models generate plausible‑but‑incorrect information—and asks how much trust users can place in these systems. Joshua Starmer, speaking with Data Science Dojo, argues that while the technology will improve,...

The video walks viewers through the decision‑making process for selecting an embedding model, a critical component in building vector‑database‑driven applications. It contrasts two concrete examples—a modern open‑source BERT‑base model and a proprietary OpenAI offering—while acknowledging the overwhelming variety of alternatives...

The video chronicles a creator’s effort to teach a Unitree G1 humanoid to walk using reinforcement‑learning techniques, emphasizing the progression from simulator‑to‑simulator validation (Sim2Sim) to real‑world deployment (Sim2Real). After years of attempting Sim2Real, the presenter finally succeeded thanks to advances...

The video introduces Exec Prep GPT, a generative‑AI assistant built to automate the preparation and feedback of “tee‑up” documents that executives use to surface decisions. The presenter feeds the model a deliberately weak tee‑up—lacking clear purpose, approver, and background—to showcase how the...

The video provides a step‑by‑step guide for developers who want to run large language models (LLMs) on their own hardware, focusing on two primary approaches: the open‑source Ollama tool and Docker’s model runner. It begins by positioning local inference as...
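Ollama exposes a local REST API (by default on port 11434), so any HTTP client can drive a locally running model. A sketch that builds — but does not send — a request to its `/api/generate` endpoint; actually running it requires the Ollama daemon and a pulled model:

```python
import json
import urllib.request

# Build a request against Ollama's local generate endpoint. Nothing is sent
# here; urlopen() would require a running daemon with the model pulled.
payload = {
    "model": "llama3",               # any locally pulled model tag
    "prompt": "Why is the sky blue?",
    "stream": False,                 # one JSON object instead of a token stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)

# With the daemon running:
#   with urllib.request.urlopen(req) as resp:
#       print(json.loads(resp.read())["response"])
```

The same payload shape works from any language, which is why local runners like Ollama slot easily underneath existing application code.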

Mistral AI unveiled its latest offering, Mistral OCR 3, a next‑generation optical character recognition model that promises to bridge the gap between raw document images and actionable data. The announcement positions the technology as a catalyst for a new wave...

The video, presented by Kyra from Anthropic’s safeguards team, introduces the concept of “sycophancy” in AI—when a model tells users what they want to hear rather than what is accurate or helpful. Drawing on her background in psychiatric epidemiology, Kyra...

The video showcases a prototype social simulation built on Google’s Gemini 3 Flash model, where three AI agents—Jack, a barista at the Daily Grind; Claude, a barista at Bean There; and Erica, a shared customer—interact through a gossip‑style conduit. By capturing each agent’s...

The video introduces a new browser‑based integration of Anthropic’s Claude, positioning the AI as a hands‑free assistant that can take over routine web‑based work. By embedding Claude directly into a sidebar, users can invoke the model to read, summarize, and...

AI is reshaping the labor market at breakneck speed, and the video’s creator argues that the real threat isn’t a robot apocalypse but the inability to keep pace with relentless change. He frames the next two‑year window as a rare...

Project VEND is Anthropic’s live experiment in which its Claude model was tasked with running a small vending‑machine business from the company’s office. The AI, personified as “Claudius,” handled everything from Slack‑based customer requests and wholesale sourcing to pricing,...

The video titled “Two Futures” (2025) serves as a high‑concept launch narrative for a next‑generation artificial‑intelligence platform, positioning it as the foundational “fuel” for creating “infinite universes” of innovation. It frames the technology as the most complex and large‑scale...

The video spotlights Binti, a technology platform designed to accelerate the licensing of foster and adoptive families, leveraging Anthropic’s Claude AI to automate paperwork for social workers. The speaker, a veteran social worker with eleven years of experience, explains that...

The video “From Word2Vec to Transformers | Vector Databases for Beginners | Part 4” walks viewers through the historical shift from static, word‑level embeddings to context‑aware transformer‑based models. It opens by recapping the shortcomings of early techniques like Word2Vec—namely their...
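The Word2Vec shortcoming is easy to show: a static model keeps exactly one vector per word, so the two senses of “bank” are indistinguishable, whereas a contextual transformer produces a different vector per occurrence. The vectors below are invented for illustration:

```python
# Static word embeddings (Word2Vec-style) assign ONE vector per word, so
# "bank" in "river bank" and "bank account" look identical to the model.
# Contextual models instead compute a fresh vector for every occurrence.
static_vec = {"bank": [0.5, 0.5]}   # one vector, regardless of context

def static_embed(sentence: str) -> list:
    """Look up each word's single fixed vector (zero vector if unknown)."""
    return [static_vec.get(w, [0.0, 0.0]) for w in sentence.split()]

a = static_embed("river bank")[1]    # "bank" as in a riverside
b = static_embed("bank account")[0]  # "bank" as in a financial institution
print(a == b)  # True: a static model cannot tell the senses apart
```

Resolving exactly this ambiguity — letting surrounding words reshape each token’s vector via attention — is the step from Word2Vec to transformers that the video traces.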

The video introduces NVIDIA’s NeMo Agent Toolkit (NAT), an open‑source suite designed to harden AI agents for production use. Hosted by NVIDIA engineer Brian McBear, the course walks viewers through transforming a proof‑of‑concept chatbot into a reliable, scalable service, emphasizing...

Google unveiled Gemini 3.0 Flash, a low‑latency, cost‑optimized sibling of the Gemini 3 Pro model. While the official blog post is pending, the model is already accessible via platforms like Zenmux and OpenRouter. Priced at $0.30 per million input tokens...

In the video, the creator outlines a step‑by‑step roadmap for becoming a machine‑learning (ML) engineer by 2026 without a university degree, emphasizing the specific technical competencies and practical tools needed to break into the role. The guide is framed as...