Segment Anything Model
The Segment Anything Model (SAM) is a foundation model for image segmentation that aims to segment any object in any image from minimal user input. Current research focuses on improving SAM's efficiency for resource-constrained environments, adapting it to specific domains such as medical imaging and video, and combining it with other models, including large language models, for more complex tasks. SAM's strong zero-shot generalization and flexible prompting (e.g., points, boxes, and masks) are reshaping image segmentation, improving annotation efficiency and task performance in fields ranging from medical diagnosis to autonomous driving.
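To make the prompt-driven workflow concrete, the following is a minimal sketch using Meta AI's segment_anything package. The checkpoint filename, image path, and point coordinates are illustrative placeholders, not values from the papers below.

import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM backbone from a local checkpoint (filename is a placeholder;
# Meta AI publishes checkpoints for the vit_b, vit_l, and vit_h variants).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# Read an RGB image (path is a placeholder) and embed it once;
# all subsequent prompts reuse this embedding.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt with a single foreground point: label 1 = foreground, 0 = background.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks for an ambiguous prompt
)
best_mask = masks[np.argmax(scores)]  # keep the highest-scoring candidate

Because the image embedding is computed once and each prompt only runs the lightweight mask decoder, interactive annotation stays fast; this is the property that much of the efficiency and domain-adaptation work below builds on.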
Papers
PointSAM: Pointly-Supervised Segment Anything Model for Remote Sensing Images
Nanqing Liu, Xun Xu, Yongyi Su, Haojie Zhang, Heng-Chao Li
MCICSAM: Monte Carlo-guided Interpolation Consistency Segment Anything Model for Semi-Supervised Prostate Zone Segmentation
Guantian Huang, Beibei Li, Xiaobing Fan, Aritrick Chatterjee, Cheng Wei, Shouliang Qi, Wei Qian, Dianning He
An Augmentation-based Model Re-adaptation Framework for Robust Image Segmentation
Zheming Zuo, Joseph Smith, Jonathan Stonehouse, Boguslaw Obara
Privacy-Preserving SAM Quantization for Efficient Edge Intelligence in Healthcare
Zhikai Li, Jing Zhang, Qingyi Gu
SAM-OCTA2: Layer Sequence OCTA Segmentation with Fine-tuned Segment Anything Model 2
Xinrun Chen, Chengliang Wang, Haojian Ning, Mengzhan Zhang, Mei Shen, Shiying Li