Our Chief AI Officer @alexanddeer will take the stage at the India AI Impact Summit.
Date: Thursday, February 19
Time: 1:53pm IST // 12:23am PST
Watch the livestream here: https://www.youtube.com/live/WgW7cC-kHgY
Our Segment Anything Models are helping advance flood monitoring and disaster response. See how the Universities Space Research Association (USRA) and U.S. Geological Survey (USGS) have fine-tuned SAM to automate a key bottleneck task in real-time river mapping, enabling faster, scalable, and...
We’re open-sourcing Perception Encoder Audiovisual (PE-AV), the technical engine behind SAM Audio’s state-of-the-art audio separation. Built on the Perception Encoder model we released earlier this year, PE-AV integrates audio with visual perception, achieving leading results across a wide range of...
🔉 Introducing SAM Audio, the first unified model that isolates any sound from complex audio mixtures using text, visual, or span prompts. We’re sharing SAM Audio with the community, along with a perception encoder model, benchmarks, and research papers, to empower...

SAM 3D is helping advance the future of rehabilitation. See how researchers at @carnegiemellon are using SAM 3D to capture and analyze human movement in clinical settings, opening the door to personalized, data-driven insights in the recovery process. 🔗 Learn more about...

SAM 3’s ability to precisely detect and track objects is helping Conservation X Labs measure the survival of animal species around the world and prevent their extinction. 🔗 Learn more about the work: https://ai.meta.com/blog/segment-anything-conservation-x-wildlife-monitoring/?utm_source=threads&utm_medium=organic_social&utm_content=video&utm_campaign=sam

The Segment Anything Playground is a new way to interact with media. Experiment with Meta’s most advanced segmentation models, including SAM 3 + SAM 3D, and discover how these capabilities can transform your creative projects and technical workflows. 🔗 Try it...
We’re advancing on-device AI with ExecuTorch, now deployed across devices including Meta Quest 3, Ray-Ban Meta, Oakley Meta Vanguard, and Meta Ray-Ban Display. By eliminating conversion steps and supporting pre-deployment validation in PyTorch, ExecuTorch accelerates the path from research to production,...

Meet SAM 3, a unified model that enables detection, segmentation, and tracking of objects across images and videos. SAM 3 introduces some of our most highly requested features like text and exemplar prompts to segment all objects of a target...
Today we’re excited to unveil a new generation of Segment Anything Models:
1️⃣ SAM 3 enables detection, segmentation, and tracking of objects across images and videos, now with short text phrases and exemplar prompts.
🔗 Learn more about SAM 3: https://go.meta.me/4de3d8
2️⃣...

Introducing Meta Omnilingual Automatic Speech Recognition (ASR), a suite of models providing ASR capabilities for over 1,600 languages, including 500 low-coverage languages never before served by any ASR system. While most ASR systems focus on a limited set of languages that...