Segment Anything Model
The Segment Anything Model (SAM) is a foundation model for image segmentation that aims to provide a universal solution: segmenting any object in any image from minimal user input. Current research focuses on improving SAM's efficiency for resource-constrained environments, adapting it to specific domains such as medical imaging and video, and combining it with other models, including large language models, for more complex tasks. SAM's strong zero-shot generalization and flexible prompting (points, boxes, and masks) are reshaping image segmentation practice, improving annotation efficiency and task performance in fields ranging from medical diagnosis to autonomous driving.
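To make the promptable interface above concrete, the minimal sketch below uses the official segment_anything package to segment an object from a single foreground point; the image path, checkpoint filename, and point coordinates are placeholder assumptions, not taken from any of the papers listed here.

import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Load a ViT-B SAM checkpoint (path/filename assumed; weights come from the official SAM release).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# Embed the image once; subsequent prompts reuse the cached image embedding.
image = np.array(Image.open("example.jpg").convert("RGB"))  # hypothetical image path
predictor.set_image(image)

# A single foreground point (x, y) is enough to request masks for the object under it.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),  # illustrative coordinates
    point_labels=np.array([1]),           # 1 = foreground, 0 = background
    multimask_output=True,                # return several candidate masks with quality scores
)
print(masks.shape, scores)  # boolean masks of shape (3, H, W) and their predicted IoU scores

Box and mask prompts use the same call through the box= and mask_input= arguments, which is the prompt flexibility the overview refers to.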
Papers
FastSAM3D: An Efficient Segment Anything Model for 3D Volumetric Medical Images
Yiqing Shen, Jingxing Li, Xinyuan Shao, Blanca Inigo Romillo, Ankush Jindal, David Dreizin, Mathias Unberath
WSI-SAM: Multi-resolution Segment Anything Model (SAM) for histopathology whole-slide images
Hong Liu, Haosen Yang, Paul J. van Diest, Josien P. W. Pluim, Mitko Veta
Customizing Segmentation Foundation Model via Prompt Learning for Instance Segmentation
Hyung-Il Kim, Kimin Yun, Jun-Seok Yun, Yuseok Bae
SAM-Lightening: A Lightweight Segment Anything Model with Dilated Flash Attention to Achieve 30 times Acceleration
Yanfei Song, Bangzheng Pu, Peng Wang, Hongxu Jiang, Dong Dong, Yongxiang Cao, Yiqing Shen
VRP-SAM: SAM with Visual Reference Prompt
Yanpeng Sun, Jiahui Chen, Shan Zhang, Xinyu Zhang, Qiang Chen, Gang Zhang, Errui Ding, Jingdong Wang, Zechao Li
Robust Unsupervised Crowd Counting and Localization with Adaptive Resolution SAM
Jia Wan, Qiangqiang Wu, Wei Lin, Antoni B. Chan
Segment anything model for head and neck tumor segmentation with CT, PET and MRI multi-modality images
Jintao Ren, Mathis Rasmussen, Jasper Nijkamp, Jesper Grau Eriksen, Stine Korreman
SAM-DiffSR: Structure-Modulated Diffusion Model for Image Super-Resolution
Chengcheng Wang, Zhiwei Hao, Yehui Tang, Jianyuan Guo, Yujie Yang, Kai Han, Yunhe Wang