Segment Anything Model
The Segment Anything Model (SAM) is a foundation model for image segmentation that aims to segment any object in any image from minimal user input, such as point, box, or mask prompts. Current research focuses on improving SAM's efficiency for resource-constrained environments, adapting it to specific domains like medical imaging and video, and combining it with other models, such as large language models, for more complex tasks. SAM's strong zero-shot generalization and flexible prompting are reshaping image segmentation, improving annotation efficiency and task performance in fields ranging from medical diagnosis to autonomous driving. A minimal usage sketch follows.
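The sketch below illustrates point-prompted segmentation with the official `segment-anything` package from the SAM repository. The image path, point coordinates, and checkpoint filename are illustrative placeholders; the checkpoint itself must be downloaded separately.

```python
# Minimal sketch: prompt SAM with a single foreground point and keep the
# highest-scoring candidate mask. Assumes `pip install segment-anything`
# plus numpy and opencv-python, and a locally downloaded ViT-B checkpoint.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a ViT-B SAM checkpoint (path is a placeholder).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# SAM expects an RGB uint8 array of shape (H, W, 3).
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground point prompt (label 1 = foreground, 0 = background).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),  # (x, y) in image pixels
    point_labels=np.array([1]),
    multimask_output=True,  # return candidate masks at several granularities
)
best_mask = masks[np.argmax(scores)]  # (H, W) boolean mask
```

Setting `multimask_output=True` asks SAM to resolve prompt ambiguity by returning several candidate masks (e.g., part vs. whole object), from which the predicted-IoU scores can pick the best one.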
Papers
When SAM Meets Medical Images: An Investigation of Segment Anything Model (SAM) on Multi-phase Liver Tumor Segmentation
Chuanfei Hu, Tianyi Xia, Shenghong Ju, Xinde Li
Learning to "Segment Anything" in Thermal Infrared Images through Knowledge Distillation with a Large Scale Dataset SATIR
Junzhang Chen, Xiangzhi Bai
SAM Struggles in Concealed Scenes -- Empirical Study on "Segment Anything"
Ge-Peng Ji, Deng-Ping Fan, Peng Xu, Ming-Ming Cheng, Bowen Zhou, Luc Van Gool
Segment Anything Is Not Always Perfect: An Investigation of SAM on Different Real-world Applications
Wei Ji, Jingjing Li, Qi Bi, Tingwei Liu, Wenbo Li, Li Cheng
SAMM (Segment Any Medical Model): A 3D Slicer Integration to SAM
Yihao Liu, Jiaming Zhang, Zhangcong She, Amir Kheradmand, Mehran Armand
SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model
Saikat Roy, Tassilo Wald, Gregor Koehler, Maximilian R. Rokuss, Nico Disch, Julius Holzschuh, David Zimmerer, Klaus H. Maier-Hein
SAM vs BET: A Comparative Study for Brain Extraction and Segmentation of Magnetic Resonance Images using Deep Learning
Sovesh Mohapatra, Advait Gosai, Gottfried Schlaug