Segment Anything Model (SAM)
Meta AI’s Segment Anything Model (SAM, 2023) is an open-source image segmentation model, trained on Meta’s SA-1B dataset of 1B masks across 11M images.
Version: 1.0
Released: April 5, 2023
Architecture
- parameters: ≈636M (ViT-H image encoder)
- prompt_types: points, boxes, and coarse masks (the model is prompt-based rather than token-based, so no context length applies)
- training_data: SA-1B dataset: 1B masks on 11M images
- inference: ViT image encoder + prompt encoder + lightweight mask decoder
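The encoder/prompt/decoder split can be sketched with a toy example. Everything here is illustrative: a random projection stands in for the ViT image encoder, an embedding lookup for the prompt encoder, and cosine similarity for the transformer mask decoder.

```python
import numpy as np

def encode_image(image, dim=8):
    """Toy stand-in for the ViT image encoder: one embedding per pixel."""
    h, w, _ = image.shape
    rng = np.random.default_rng(0)
    # Random colour projection; the real model uses a ViT-H backbone.
    proj = rng.standard_normal((3, dim))
    return image.reshape(h * w, 3) @ proj  # (h*w, dim)

def encode_point_prompt(embeddings, point, width):
    """Toy prompt encoder: look up the embedding at the clicked pixel."""
    y, x = point
    return embeddings[y * width + x]  # (dim,)

def decode_mask(embeddings, prompt_emb, shape, threshold=0.9):
    """Toy mask decoder: cosine similarity between prompt and every pixel."""
    norms = np.linalg.norm(embeddings, axis=1) * np.linalg.norm(prompt_emb)
    sim = embeddings @ prompt_emb / np.maximum(norms, 1e-9)
    return (sim > threshold).reshape(shape)

# A 4x4 "image": a bright 2x2 square on a dark background.
image = np.zeros((4, 4, 3))
image[1:3, 1:3] = 1.0

emb = encode_image(image)
prompt = encode_point_prompt(emb, point=(1, 1), width=4)  # click the square
mask = decode_mask(emb, prompt, shape=(4, 4))
print(int(mask.sum()))  # 4 -- the clicked square's pixels are selected
```

The same structure explains SAM's interactive speed: the expensive image encoding runs once, and each new prompt only re-runs the cheap prompt encoder and mask decoder.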
Capabilities
- Promptable image segmentation: generates object masks from various prompts (points, boxes, coarse masks; text prompting was explored in the paper but not released)
- Strong zero-shot performance across diverse datasets without fine-tuning
Benchmarks
- Zero-shot segmentation: competitive with fully supervised models across a suite of 23 segmentation datasets, without fine-tuning
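Segmentation benchmarks like these are typically scored with mask IoU (intersection over union) against ground truth. A minimal sketch of the metric, with made-up masks for illustration:

```python
import numpy as np

def mask_iou(pred, gt):
    """IoU between two boolean masks: |A ∩ B| / |A ∪ B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

pred = np.zeros((4, 4), dtype=bool)
gt = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:3] = True  # predicted 2x2 square
gt[1:3, 1:4] = True    # ground-truth 2x3 region
print(round(mask_iou(pred, gt), 3))  # 4 / 6 -> 0.667
```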
Safety
- Open model: weights and code are publicly released
- Users should consider potential biases in the training data that may affect segmentation outputs
Deployment
- regions: self-hosted (weights are downloaded and run locally; no managed API)
- hosting: HuggingFace, GitHub
- integrations: widely adopted in computer vision pipelines and annotation tools
Tags
segmentation · computer-vision · open-source · vision-transformer