Open-Source AI Isn’t Riskier; Both Need Shared Security
Weird how some people always target open-source in AI!

First it was: “Open-source AI will destroy the world” (spoiler: it didn't and it won't).

Now: “Open-source is a cybersecurity threat because of AI.”

Both narratives are far too simplistic. The truth is that the exact same risks exist in closed-source systems, often even more so. For example, in practice, APIs can create much bigger data and security vulnerabilities than open systems you can inspect, self-host, and secure yourself. And as with software more broadly, open-source often ends up more secure because it benefits from far more scrutiny than private internal systems.

The reality is not “open vs closed.” The reality is that AI is raising cybersecurity stakes across the board, and we need to tackle that seriously, together.
Open‑source Models Fail Due to Mismatched Agent Harnesses
Is there a collection somewhere of the best agent/coding harnesses for each model, especially open-source and local ones? In my opinion, the biggest reason people are struggling with open/local models these days is that the agent/coding harnesses in most open...
Deploy GPU Kernels as Easily as Models
Introducing Kernels on the Hugging Face Hub ✨ What if shipping a GPU kernel was as easy as pushing a model? - Pre-compiled for your exact GPU, PyTorch & OS - Multiple kernel versions coexist in one process - torch.compile compatible - 1.7x–2.5x speedups over...

27K arXiv Papers OCR'd to Markdown in 29 Hours
We just OCR'd 27,000 arxiv papers into Markdown using an open 5B model, 16 parallel HF Jobs on L40S GPUs, and a mounted bucket. Total cost: $850 Total time: ~29 hours Jobs that crashed: 0 This now powers "Chat with your paper"...
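The figures in the post imply some useful back-of-envelope unit economics. A quick arithmetic sketch using only the numbers quoted above:

```python
papers = 27_000
cost_usd = 850
hours = 29
gpus = 16  # parallel HF Jobs on L40S GPUs

cost_per_paper = cost_usd / papers             # ~$0.031 per paper
papers_per_hour = papers / hours               # ~931 papers/hour overall
papers_per_gpu_hour = papers / (gpus * hours)  # ~58 papers per GPU-hour

print(f"${cost_per_paper:.3f}/paper, {papers_per_hour:.0f} papers/h, "
      f"{papers_per_gpu_hour:.0f} papers/GPU-h")
```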

Hugging Face Launches Kernels for AI Engineers
In a world where writing code to build websites and apps is trivial (thank you Lovable, Cursor, Claude,...), the real differentiation for you and your company (and what makes you successful) will be how you manage to train, run and...
Critical Open‑Source Projects Need Funding and Stronger Oversight
Feels like one of the cybersecurity risks over the coming months will be widely used open-source projects that are simply too lightly maintained for how critical they’ve become. A few ways to help:
- fund open source more, and reward maintainers better
- ...
Tiny Open-Weight Models Replicate Anthropic's Vulnerability Detection
"But here is what we found when we tested: We took the specific vulnerabilities Anthropic showcases in their announcement, isolated the relevant code, and ran them through small, cheap, open-weights models. Those models recovered much of the same analysis. Eight...
Defenders Must Build Infrastructure Now; Models Ready, Ecosystem Lagging
"The priority for defenders is to start building now: the scaffolds, the pipelines, the maintainer relationships, the integration into development workflows. The models are ready. The question is whether the rest of the ecosystem is." https://t.co/z2GZ3SdDwW
What’s Your Current Spend on S3/R2 Storage?
Curious, how much are you all spending in S3/R2 or storage in general these days?
Deploy Gemma 4 26B via llama.cpp's OpenAI API
llama-server -hf ggml-org/gemma-4-26b-a4b-it-GGUF:Q4_K_M

openclaw onboard --non-interactive \
  --auth-choice custom-api-key \
  --custom-base-url "http://127.0.0.1:8080/v1" \
  --custom-model-id "ggml-org-gemma-4-26b-a4b-gguf" \
  --custom-api-key "llama.cpp" \
  --secret-input-mode plaintext \
  --custom-compatibility openai \
  --accept-risk
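Once llama-server is up, anything that speaks the OpenAI chat-completions protocol can talk to it. A minimal sketch, assuming the server above is listening on 127.0.0.1:8080 (the `build_chat_request` helper and the prompt are illustrative, not part of llama.cpp):

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(base_url: str, payload: dict) -> dict:
    """POST the payload to the OpenAI-compatible endpoint and parse the JSON reply."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            # llama-server doesn't validate the key; any value works
            "Authorization": "Bearer llama.cpp",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request("ggml-org-gemma-4-26b-a4b-gguf", "Hello!")

if __name__ == "__main__":
    # Requires the llama-server from above to be running locally.
    reply = chat("http://127.0.0.1:8080/v1", payload)
    print(reply["choices"][0]["message"]["content"])
```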
Only Frontier Breakthroughs or Open-Source Attract Attention
If it’s not either pushing the frontier meaningfully or open-source, no one will care these days (which is why most orgs should release open-source to get some attention and developer mindshare)
Gemma 4 Runs Locally: Free, Safe, and Fast
this is Gemma 4 running locally on a 3 year old mac, meaning:
- free (=$0 no matter how much you use)
- safe (you're not leaking all your data via unsafe APIs)
- fast (as you can see)

Google Launches Apache‑licensed Gemma 4 for Local AI
So happy to see Google release Gemma 4 today under Apache 2.0, giving you frontier capabilities locally. You can use it right away in all your favorite open agent platforms like openclaw, opencode, pi, Hermes by asking ...

Git Fails for ML Data; Buckets Provide Mutable Storage
Hot take: Git was the wrong abstraction for 90% of ML data. Checkpoints, optimizer states, training logs, agent traces - none of this needs version control. It needs fast, cheap, mutable storage. So we built Buckets. S3-like storage on the @huggingface Hub...
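The mutable-storage point can be made concrete: a checkpoint store only needs put/get/overwrite semantics, with no commit history. An illustrative in-memory sketch (this `Bucket` class is hypothetical, not the Hub's actual Buckets API):

```python
class Bucket:
    """Toy S3-like store: keys map to bytes; overwrites are cheap and history-free."""

    def __init__(self):
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        # Unlike a git commit, this simply replaces the old object in place.
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

    def exists(self, key: str) -> bool:
        return key in self._objects

bucket = Bucket()
# A training loop overwrites the same checkpoint key at every save:
for step in (100, 200, 300):
    bucket.put("run-1/checkpoint.pt", f"weights@{step}".encode())

latest = bucket.get("run-1/checkpoint.pt")  # only the latest object survives
```

The design trade-off is exactly the hot take above: git would store all three checkpoint versions forever, while an object store keeps just the one you actually need.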

TRL V1 Launches with 75+ Cutting‑Edge Training Methods
Today we’re releasing TRL v1. 75+ methods. SFT, DPO, GRPO, async RL to take advantage of the latest and greatest open-source. 6 years from first commit to the library that post-trains most open models in the world. Built to be future...