SAM 3 could accelerate computer-vision tasks and content-creation workflows, enabling faster product features in Meta’s apps and broader applications in wearables, robotics and entertainment. Its multi-modal prompting and refinement capabilities lower the barrier to adopting segmentation and tracking at scale.
Meta unveiled Segment Anything Model 3 (SAM 3), a unified model that combines detection, segmentation and tracking for images and video. Building on the click prompting of previous versions, SAM 3 introduces text and visual prompting to detect and segment every object of a given category at once, plus follow-up prompts for refining predictions. The model aims to eliminate manual per-object segmentation and speed up workflows across use cases. Meta is making SAM 3 available in the Segment Anything playground and is already using it to power new effects in Instagram’s Edits app.
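The shift described above, from one click per object to one concept prompt for all matching objects, can be illustrated with a toy sketch. Everything here is hypothetical: the `ConceptSegmenter` class and its `segment`/`refine` methods are stand-ins invented for illustration, not SAM 3's actual API.

```python
from dataclasses import dataclass

@dataclass
class Mask:
    """Toy stand-in for a segmentation mask."""
    label: str
    object_id: int

class ConceptSegmenter:
    """Hypothetical stand-in for a SAM 3-style model: one text prompt
    returns masks for every instance of the prompted concept, and a
    follow-up prompt refines the result. Not the real API."""

    def __init__(self, scene):
        # scene: mapping of object_id -> category label (a toy "image")
        self.scene = scene

    def segment(self, text_prompt):
        # Text prompting: one mask per object matching the category,
        # instead of one manual click per object.
        return [Mask(label, oid) for oid, label in self.scene.items()
                if label == text_prompt]

    def refine(self, masks, exclude_ids):
        # Follow-up prompt: drop specific instances from the prediction.
        return [m for m in masks if m.object_id not in exclude_ids]

scene = {0: "dog", 1: "cat", 2: "dog", 3: "car"}
model = ConceptSegmenter(scene)
dogs = model.segment("dog")      # both dogs from a single prompt
kept = model.refine(dogs, {2})   # refinement removes one instance
print([m.object_id for m in dogs])  # [0, 2]
print([m.object_id for m in kept])  # [0]
```

The point of the sketch is the workflow change: the prompt names a concept once, and refinement operates on the whole set of returned instances rather than restarting per object.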