
SAM 3.1 Boosts Video Efficiency with Object Multiplexing
We’re releasing SAM 3.1: a drop-in update to SAM 3 that introduces object multiplexing to significantly improve video processing efficiency without sacrificing accuracy. We’re sharing this update with the community to help make high-performance applications feasible on smaller, more accessible hardware.
🔗 Model Checkpoint: https://go.meta.me/8dd321
🔗 Codebase: https://go.meta.me/b0a9fb
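The post doesn’t describe how object multiplexing is implemented, but the efficiency claim has a simple intuition: when tracking N objects in a video, sharing one expensive backbone pass per frame across all objects (rather than re-running the full model once per object) amortizes the dominant cost. A toy cost model with entirely hypothetical numbers — a conceptual sketch, not SAM 3.1’s actual code:

```python
# Conceptual sketch of why "multiplexing" objects through a shared
# backbone pass reduces per-frame cost. All numbers are made up.

BACKBONE_COST = 100  # hypothetical cost units for one backbone forward pass
HEAD_COST = 5        # hypothetical cost units for one per-object mask head

def per_object_cost(num_objects: int) -> int:
    """Naive tracking: run the full model once per tracked object."""
    return num_objects * (BACKBONE_COST + HEAD_COST)

def multiplexed_cost(num_objects: int) -> int:
    """Multiplexed tracking: one shared backbone pass, then N light heads."""
    return BACKBONE_COST + num_objects * HEAD_COST

print(per_object_cost(8))   # 840
print(multiplexed_cost(8))  # 140
```

With eight tracked objects and these made-up costs, the per-object approach spends 840 units per frame versus 140 when multiplexed, and the gap widens as more objects are tracked — which is why a change like this matters most for multi-object video workloads on smaller hardware.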

TRIBE V2 Predicts Brain Responses to Any Stimulus
Today we're introducing TRIBE v2 (Trimodal Brain Encoder), a foundation model trained to predict how the human brain responds to almost any sight or sound. Building on our Algonauts 2025 award-winning architecture, TRIBE v2 draws on 500+ hours of fMRI recordings...

Open‑Source Canopy Height Maps V2 Boosts Global Forest Detail
We’re announcing Canopy Height Maps v2 (CHMv2), an open source model for high-resolution global forest canopy mapping, developed in partnership with @worldresources. CHMv2 leverages our DINOv3 Sat-L vision model, specifically optimized for satellite imagery, to deliver substantial improvements in accuracy,...

Meta Accelerates Custom AI Silicon: Four Generations in Two Years
Custom silicon is critical to scaling next-gen AI. We’re detailing the evolution of the Meta Training and Inference Accelerator (MTIA), our homegrown silicon family designed to power the next era of AI experiences. Traditional chip cycles span years, but model...

Chief AI Officer Alex Deer Speaks at India AI Summit
Our Chief AI Officer @alexanddeer will take the stage at the India AI Impact Summit.
Date: Thursday, February 19
Time: 1:53pm IST // 12:23am PST
Watch the livestream here: https://www.youtube.com/live/WgW7cC-kHgY

SAM Accelerates Real-Time River Mapping for Flood Response
Our Segment Anything Models are helping advance flood monitoring and disaster response. See how the Universities Space Research Association (USRA) and U.S. Geological Survey (USGS) have fine-tuned SAM to automate a key bottleneck in real-time river mapping, enabling faster, scalable, and...

Open‑Source PE‑AV Boosts Audio‑Visual Separation Performance
We’re open-sourcing Perception Encoder Audiovisual (PE-AV), the technical engine that helps drive SAM Audio’s state-of-the-art audio separation. Built on our Perception Encoder model from earlier this year, PE-AV integrates audio with visual perception, achieving state-of-the-art results across a wide range of...

SAM Audio Isolates Any Sound From Mixtures via Prompts
🔉 Introducing SAM Audio, the first unified model that isolates any sound from complex audio mixtures using text, visual, or span prompts. We’re sharing SAM Audio with the community, along with a perception encoder model, benchmarks and research papers, to empower...

SAM 3D Enables Data‑Driven, Personalized Rehabilitation Insights
SAM 3D is helping advance the future of rehabilitation. See how researchers at @carnegiemellon are using SAM 3D to capture and analyze human movement in clinical settings, opening the doors to personalized, data-driven insights in the recovery process. 🔗 Learn more about...

SAM 3 Powers Precise Wildlife Tracking to Prevent Extinction
SAM 3’s ability to precisely detect and track objects is helping Conservation X Labs measure the survival of animal species around the world and prevent their extinction. 🔗 Learn more about the work: https://ai.meta.com/blog/segment-anything-conservation-x-wildlife-monitoring/?utm_source=threads&utm_medium=organic_social&utm_content=video&utm_campaign=sam

Explore Meta’s SAM 3 & 3D in New Playground
The Segment Anything Playground is a new way to interact with media. Experiment with Meta’s most advanced segmentation models, including SAM 3 + SAM 3D, and discover how these capabilities can transform your creative projects and technical workflows. 🔗 Try it...

ExecuTorch Speeds On‑device AI Across Meta Hardware
We’re advancing on-device AI with ExecuTorch, now deployed across devices including Meta Quest 3, Ray-Ban Meta, Oakley Meta Vanguard and Meta Ray-Ban Display. By eliminating conversion steps and supporting pre-deployment validation in PyTorch, ExecuTorch accelerates the path from research to production,...

SAM 3 Brings Unified Object Segmentation to Creators
Meet SAM 3, a unified model that enables detection, segmentation, and tracking of objects across images and videos. SAM 3 introduces some of our most highly requested features like text and exemplar prompts to segment all objects of a target...

Introducing SAM 3 and SAM 3D: Next‑Gen Segmentation
Today we’re excited to unveil a new generation of Segment Anything Models:
1️⃣ SAM 3 enables detection, segmentation, and tracking of objects across images and videos, now with short text phrases and exemplar prompts.
🔗 Learn more about SAM 3: https://go.meta.me/4de3d8
2️⃣...

Meta Launches ASR for 1,600 Languages, Including 500 New
Introducing Meta Omnilingual Automatic Speech Recognition (ASR), a suite of models providing ASR capabilities for over 1,600 languages, including 500 low-coverage languages never before served by any ASR system. While most ASR systems focus on a limited set of languages that...