Introducing Meta Segment Anything Model 3 (SAM 3): Unified Detection, Segmentation & Tracking

AI at Meta
Nov 19, 2025

Why It Matters

SAM 3 could accelerate computer-vision tasks and content-creation workflows, enabling faster product features in Meta’s apps and broader applications in wearables, robotics and entertainment. Its multi-modal prompting and refinement capabilities lower the barrier to large-scale segmentation and tracking adoption.

Summary

Meta unveiled Segment Anything Model 3 (SAM 3), a unified model that combines detection, segmentation, and tracking for images and video. Building on the click prompting of previous versions, SAM 3 introduces text prompting and visual prompting to detect and segment every object of a given category at once, plus follow-up prompts for refining predictions. The model aims to eliminate manual per-object segmentation and speed up workflows across use cases. Meta is making SAM 3 available in the Segment Anything playground and is already using it to power new effects in Instagram's Edits app.
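The shift from per-object click prompting to category-level prompting can be sketched with a toy mock. Everything here (ClickPrompt, TextPrompt, ExemplarPrompt, segment, etc.) is a hypothetical illustration of the workflow the announcement describes, not SAM 3's actual API:

```python
# Hypothetical sketch: a click prompt selects one object, while SAM 3-style
# text and exemplar prompts select every instance of a category at once.
from dataclasses import dataclass

@dataclass
class Mask:
    label: str            # category label for the segmented object
    instance_id: int      # one mask per detected instance

@dataclass
class ClickPrompt:        # SAM 1/2-style prompt: one point -> one object
    x: int
    y: int

@dataclass
class TextPrompt:         # new in SAM 3: a noun phrase names a category
    phrase: str

@dataclass
class ExemplarPrompt:     # new in SAM 3: an example box names a category
    box: tuple            # (x0, y0, x1, y1)

def segment(image_instances: dict, prompt) -> list:
    """Mock dispatcher: returns one mask for a click, or one mask per
    instance of the prompted category for text/exemplar prompts."""
    if isinstance(prompt, ClickPrompt):
        # Pretend the click landed on one instance of the first category.
        label = next(iter(image_instances))
        return [Mask(label, 0)]
    if isinstance(prompt, TextPrompt):
        n = image_instances.get(prompt.phrase, 0)
        return [Mask(prompt.phrase, i) for i in range(n)]
    if isinstance(prompt, ExemplarPrompt):
        # A real model would infer the category from the boxed example;
        # this sketch simply hard-codes it.
        label = "dog"
        return [Mask(label, i) for i in range(image_instances.get(label, 0))]
    raise TypeError("unsupported prompt type")

# A toy "image" containing three dogs and two cats.
scene = {"dog": 3, "cat": 2}
print(len(segment(scene, TextPrompt("dog"))))    # all dogs in one prompt
print(len(segment(scene, ClickPrompt(10, 20))))  # a single object
```

The point of the sketch is the dispatch: earlier SAM versions required one interaction per object, whereas a single text or exemplar prompt yields masks for the whole category.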

Original Description

Meet SAM 3, a unified model that enables detection, segmentation, and tracking of objects across images and videos. SAM 3 introduces some of our most highly requested features like text and exemplar prompts to segment all objects of a target category.
Learnings from SAM 3 will help power new features in Instagram Edits and Vibes, bringing advanced segmentation capabilities directly to creators.
We’re sharing SAM 3 under the SAM License so others can use it to build their own experiences.
🔗 Learn more: https://go.meta.me/f987cd
--
Learn more about our work: https://ai.meta.com
Follow us on Twitter: https://twitter.com/aiatmeta
Follow us on Facebook: https://www.facebook.com/aiatmeta
Connect with us on LinkedIn: https://www.linkedin.com/showcase/aiatmeta/
Meta focuses on bringing the world together by advancing AI, powering meaningful and safe experiences, and conducting open research.
