This is big. Train LLMs ~3× faster with no accuracy tradeoff. @UnslothAI just rolled out a strong set of kernel + data pipeline upgrades that remove real training bottlenecks: padding overhead, unfused ops, and long-context indexing pain. Result: models like Qwen3-4B now train on ~3GB VRAM, at roughly 3× the speed. What changed under the hood: → Fused QK RoPE Triton kernel: ~2.3× faster rotary embeddings, fully compatible with packing → Updated SwiGLU / GeGLU kernels: Int64-safe indexing, long-context training no longer breaks → Clean sample packing: 2.5×–5× higher throughput across FA3, xFormers, and SDPA, identical loss curves → Padding-free batching by default: ~2.1× faster training, ~50% less VRAM, zero accuracy loss Works across SFT, full fine-tuning, and pretraining. This is performance engineering done right: remove waste instead of piling on complexity. Links to announcement + demo below 🧵↓
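For intuition on two of those upgrades, here's a minimal Python sketch (my illustration of the general techniques, not Unsloth's actual Triton kernels; names and shapes are assumptions):

```python
import torch

def apply_rope_qk(q, k, cos, sin):
    # q, k: (batch, heads, seq, head_dim); cos/sin broadcast over them.
    # A fused kernel rotates Q and K in one GPU launch instead of two;
    # here we just express the shared math (rotate-half RoPE).
    def rotate_half(x):
        x1, x2 = x.chunk(2, dim=-1)
        return torch.cat((-x2, x1), dim=-1)
    return q * cos + rotate_half(q) * sin, k * cos + rotate_half(k) * sin

def pack_padding_free(seqs):
    # Padding-free batching: concatenate variable-length sequences into
    # one row and track cumulative boundaries (cu_seqlens) so varlen
    # attention backends can mask per-sequence with zero pad tokens.
    # Int64 offsets are what keep long-context indexing from overflowing.
    input_ids = torch.cat(seqs)
    lens = torch.tensor([len(s) for s in seqs], dtype=torch.int64)
    cu_seqlens = torch.cat([torch.zeros(1, dtype=torch.int64), lens.cumsum(0)])
    return input_ids, cu_seqlens
```

The speedup comes from skipping pad tokens entirely rather than masking them out, which is also why the loss curves can stay identical: no real tokens are touched.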
This guy literally turned Claude into an Ultrathink powerhouse 🤯 https://t.co/vjHOFs32jT
Wow. @AnthropicAI just released a free library with 50+ real Claude use cases 🤯 Concrete examples across: → research → writing → coding → analysis → everyday workflows An ace reference if you want to see how Claude is actually used in practice. Link in 🧵 ↓ https://t.co/q83IC0H3m2
bro literally built an army of AI Agents in @n8n_io with the free Kimi K2 LLM 🤯 https://t.co/aAzL8cGZrN
Gemini, OpenAI, Cursor… everyone’s cooking like crazy. Keep this pace and I’ll be at Xmas dinner like: https://t.co/z7YZV7vTlu
Major new research from Google and MIT. "More agents is all you need" has become a mantra for AI developers. We know multi-agent systems can be effective, but we build them mostly on heuristics. ... but this study (180 configs across OpenAI/Google/Anthropic...
Better than Lovable. Bold, I know. But stick with me. @Anything’s latest iteration builds full apps from scratch, fixes every issue along the way, and ships to the App Store from one prompt 🔥 Say hello to Anything Max. (oh and there’s a $100K hackathon...
💡 Tip → The GPT-5.2 model is already out in Cursor. You're welcome. https://t.co/olpG8Za6c4
[🚨 Gar-Leak Alert] @OpenAI just dropped a not-so-subtle nod to “Garlic”, the codename many believe is tied to their next model. GPT-5.2 is expected tomorrow 👀 https://t.co/1ELYioSmih
Here’s a goldmine of LLM notebooks and it goes way beyond basic fine-tuning 🤯 Auto evals, lazy merges, Franken MoEs, uncensoring techniques… real wizard-level tooling. Built by @maximelabonne and @iusztinpaul, authors of the excellent LLM Engineer’s Handbook. Repo in 🧵↓ https://t.co/VKT8uJpQwe
4D chess move by @AnthropicAI to sponsor Claude ads on stack traces that get no results in Google 🤯 https://t.co/0Vp3WkqTgE
this guy literally put in 1000 hours of prompt engineering to nail down the 6 patterns that actually matter. https://t.co/0hMWaNdgMx
Figma finally meets ChatGPT. An AI workspace that treats every doc as a movable, editable visual block, not a scrolling chat. @Felo_ai_en is a Figma-style Agent Workspace for documents, built for real collaboration and integrated AI work 🔥 5 wild use cases 🧵↓...
Airwallex is building AI agents that are about to supercharge how financial workflows get done. Stripe once tried buying them for $1.2B... they refused. Today they’re at $1B ARR and expanding globally at a crazy pace 🔥 Quick dive 🧵↓ https://t.co/QeeaVR6G4S
.@ResembleAI just dropped something HUGE today... and it might be the most advanced deepfake security tech I’ve seen this year. Meet DETECT-3B Omni. A single model that can detect fake voices, images, and videos, all in one unified system. Let's dive in 🧵↓...
Still can’t believe @karpathy released this 3.5-hour deep dive on how ChatGPT actually works for free. If there’s one AI video to watch in 2025, this is the one https://t.co/dpI7BA4HUe
Just opened ChatGPT and got this new option where you can now tweak its personality and tone 👀 https://t.co/SaSadbsxVH
AGI might be closer than we think. Google just dropped Titans + MIRAS, a long-term memory system for AI that updates itself in real time. It's a new architecture that combines the speed of RNNs with the performance of Transformers. ... and it’s...
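Rough intuition in code, a toy sketch of the test-time-learning idea behind Titans (my simplification, not Google's implementation; the lr/decay values are made up):

```python
import torch
import torch.nn as nn

class NeuralMemory(nn.Module):
    """Toy long-term memory: a small MLP whose weights keep updating
    at inference time, so the model 'remembers' by gradient descent."""
    def __init__(self, dim, lr=0.01, decay=0.001):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(),
                                 nn.Linear(dim, dim))
        self.lr, self.decay = lr, decay

    def read(self, query):          # recall an association
        return self.net(query)

    @torch.enable_grad()
    def write(self, key, value):    # memorize, scaled by "surprise"
        # Surprise = gradient of the recall error: novel associations
        # produce big errors, hence big weight updates.
        loss = (self.net(key) - value).pow(2).mean()
        grads = torch.autograd.grad(loss, list(self.net.parameters()))
        with torch.no_grad():
            for p, g in zip(self.net.parameters(), grads):
                p.mul_(1 - self.decay).sub_(self.lr * g)  # forget + learn
```

Each write is a constant-cost step per token, like an RNN update, which is roughly where the "speed of RNNs, performance of Transformers" framing comes from.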
Well, well, well! According to The Verge, @OpenAI could drop GPT-5.2 “Code Red” as soon as December 9. https://t.co/pd6oAuzQyL
The Intelligence market is inevitable. I mean, once cognition becomes interchangeable, you move from model APIs to task-based markets. Clear specs → multiple suppliers can meet them → the system routes the optimal intelligence. @The_GridAI’s Manifesto nails it 👌 ↓
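The routing step is easy to picture in code. A toy sketch (the supplier fields and cost objective are my assumptions, not The Grid's actual spec):

```python
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    latency_ms: int   # worst-case latency this supplier guarantees
    accuracy: float   # benchmark score on the task class
    cost: float       # price per task

def route(max_latency_ms: int, min_accuracy: float, suppliers: list[Supplier]):
    # Task-based market: every supplier meeting the spec is
    # interchangeable, so the router optimizes what's left (cost here).
    qualified = [s for s in suppliers
                 if s.latency_ms <= max_latency_ms
                 and s.accuracy >= min_accuracy]
    return min(qualified, key=lambda s: s.cost, default=None)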
Gemini 3 just launched, and @Browserbase's already run full computer-use evaluations to see how well it handles a real browser. Clicking, searching, filling forms: they tested it with real browsing tasks 🤘 Here’s how Gemini 3 stacks up against Claude, GPT-5, and...
This one’s a gem. A free 80-page prompt engineering guide that’s surprisingly deep, covering: → CoT → Eval methods → RAG → Agents → Prompt hacking → Multimodal prompts ... and more! Link to the guide in 🧵 ↓ https://t.co/I0RPII8u6y
I mean, do these guys ever SLEEP? Right after Kling O1, we already get @Kling_ai 2.6, their first audio-enabled model 🔥 It generates natural group dialogue with synced timing, reactions, and ambience. I just dropped in text or an image → it handled...