Segment Anything Model
The Segment Anything Model (SAM) is a foundation model for image segmentation that aims to provide a universal solution capable of segmenting any object in any image from minimal user input. Current research focuses on improving SAM's efficiency for resource-constrained environments, adapting it to specific domains such as medical imaging and video, and combining it with other models, including large language models, for more complex tasks. SAM's strong zero-shot generalization and flexible prompting (points, boxes, and masks) are reshaping image segmentation practice, improving annotation efficiency and task performance in fields ranging from medical diagnosis to autonomous driving.
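To make the promptable interface concrete, below is a minimal sketch of point-prompted inference using the official segment-anything package released by Meta AI. The checkpoint filename, image path, and point coordinates are placeholder assumptions chosen for illustration, not values from any of the papers listed here.

```python
# Minimal sketch: point-prompted segmentation with the official
# "segment-anything" package (github.com/facebookresearch/segment-anything).
# Checkpoint path, image path, and point coordinates are placeholders.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM backbone (ViT-H here; ViT-B/ViT-L are lighter options).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Embed the image once; different prompts can then be tried cheaply,
# since only the lightweight mask decoder reruns per prompt.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground point prompt (label 1 = foreground, 0 = background).
point_coords = np.array([[500, 375]])
point_labels = np.array([1])

# multimask_output=True returns three candidate masks with quality scores,
# which helps resolve prompt ambiguity (e.g., part vs. whole object).
masks, scores, logits = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]  # keep the highest-scoring candidate
```

The same `predict` call accepts box prompts (`box=`) and mask prompts (`mask_input=`), which is the flexibility in prompt types referred to above.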
Papers
Zero-Shot Digital Rock Image Segmentation with a Fine-Tuned Segment Anything Model
Zhaoyang Ma, Xupeng He, Shuyu Sun, Bicheng Yan, Hyung Kwak, Jun Gao
Enhancing the Reliability of Segment Anything Model for Auto-Prompting Medical Image Segmentation with Uncertainty Rectification
Yichi Zhang, Shiyao Hu, Sijie Ren, Chen Jiang, Yuan Cheng, Yuan Qi
SAMIHS: Adaptation of Segment Anything Model for Intracranial Hemorrhage Segmentation
Yinuo Wang, Kai Chen, Weimin Yuan, Cai Meng, XiangZhi Bai
Zero-Shot Segmentation of Eye Features Using the Segment Anything Model (SAM)
Virmarie Maquiling, Sean Anthony Byrne, Diederick C. Niehorster, Marcus Nyström, Enkelejda Kasneci
SAM-CLIP: Merging Vision Foundation Models towards Semantic and Spatial Understanding
Haoxiang Wang, Pavan Kumar Anasosalu Vasu, Fartash Faghri, Raviteja Vemulapalli, Mehrdad Farajtabar, Sachin Mehta, Mohammad Rastegari, Oncel Tuzel, Hadi Pouransari
SAM-Med3D: Towards General-purpose Segmentation Models for Volumetric Medical Images
Haoyu Wang, Sizheng Guo, Jin Ye, Zhongying Deng, Junlong Cheng, Tianbin Li, Jianpin Chen, Yanzhou Su, Ziyan Huang, Yiqing Shen, Bin Fu, Shaoting Zhang, Junjun He, Yu Qiao