Segment Anything Model
The Segment Anything Model (SAM) is a foundation model for image segmentation that aims to segment any object in any image from minimal user input, supplied as prompts such as points, boxes, or rough masks. Current research focuses on improving SAM's efficiency for resource-constrained environments, adapting it to specific domains such as medical imaging and video, and combining it with other models, such as large language models, for more complex tasks. SAM's strong zero-shot generalization and flexible prompting are reshaping image segmentation practice, improving annotation efficiency and downstream task performance in fields ranging from medical diagnosis to autonomous driving.
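To make the promptable interface concrete, the sketch below uses Meta's released `segment_anything` package to segment an object from a single foreground click. The checkpoint path, image path, and click coordinates are illustrative assumptions; any released SAM checkpoint (ViT-B/L/H) works the same way.

```python
# pip install segment-anything opencv-python
# Checkpoints: https://github.com/facebookresearch/segment-anything
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM backbone; "sam_vit_b.pth" is an assumed local checkpoint path.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

# SamPredictor expects an HxWx3 uint8 RGB array.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # computes the image embedding once per image

# A single foreground click (label 1) at an assumed pixel location;
# prompts can also be background points (label 0), boxes, or coarse masks.
point_coords = np.array([[500, 375]])
point_labels = np.array([1])

# multimask_output=True returns three candidate masks with predicted IoU
# scores, which helps when a single click is ambiguous (part vs. whole).
masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]  # keep the highest-scoring candidate
print(best_mask.shape, scores)
```

Because the image embedding is computed once and reused across prompts, interactive refinement (adding clicks) stays cheap; this is the property the efficiency- and annotation-focused papers below build on.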
Papers
Annotation-Efficient Task Guidance for Medical Segment Anything
Tyler Ward, Abdullah-Al-Zubaer Imran
SAM-Mamba: Mamba Guided SAM Architecture for Generalized Zero-Shot Polyp Segmentation
Tapas Kumar Dutta, Snehashis Majhi, Deepak Ranjan Nayak, Debesh Jha
Lightweight Method for Interactive 3D Medical Image Segmentation with Multi-Round Result Fusion
Bingzhi Shen, Lufan Chang, Siqi Chen, Shuxiang Guo, Hao Liu
Quantifying the Limits of Segment Anything Model: Analyzing Challenges in Segmenting Tree-Like and Low-Contrast Structures
Yixin Zhang, Nicholas Konz, Kevin Kramer, Maciej A. Mazurowski
Customize Segment Anything Model for Multi-Modal Semantic Segmentation with Mixture of LoRA Experts
Chenyang Zhu, Bin Xiao, Lin Shi, Shoukun Xu, Xu Zheng