Segment Anything Model
The Segment Anything Model (SAM) is a foundation model for image segmentation that aims to segment any object in any image from minimal user input, such as points, boxes, or coarse masks. Current research focuses on making SAM efficient enough for resource-constrained environments, adapting it to specific domains such as medical imaging and video, and combining it with other models, such as large language models, for more complex tasks. Its strong zero-shot generalization and flexible prompting are reshaping segmentation practice, improving annotation efficiency and task performance in fields ranging from medical diagnosis to autonomous driving.
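To illustrate the prompt-driven interface that the papers below build on, here is a minimal sketch using Meta's official segment-anything package with a single positive point prompt. The image path and point coordinates are placeholders, and this shows only vanilla SAM inference, not the adaptations proposed in the papers themselves.

```python
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Load SAM with a ViT-B backbone; the checkpoint file must be downloaded
# separately from the official segment-anything release.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# SamPredictor expects an HxWx3 uint8 RGB array ("example.jpg" is a placeholder).
image = np.array(Image.open("example.jpg").convert("RGB"))
predictor.set_image(image)

# One positive point prompt (label 1 = foreground) at an arbitrary pixel (x, y).
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return three candidate masks at different scales
)

# Keep the highest-scoring candidate mask.
best_mask = masks[np.argmax(scores)]
```

Because a single point is often ambiguous, `multimask_output=True` asks the model for several candidate masks; selecting by predicted score is the simplest disambiguation strategy, and point-prompt augmentation methods such as SAMAug refine this step further.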
Papers
SAMAug: Point Prompt Augmentation for Segment Anything Model
Haixing Dai, Chong Ma, Zhiling Yan, Zhengliang Liu, Enze Shi, Yiwei Li, Peng Shu, Xiaozheng Wei, Lin Zhao, Zihao Wu, Fang Zeng, Dajiang Zhu, Wei Liu, Quanzheng Li, Lichao Sun, Shu Zhang, Tianming Liu, Xiang Li
SAM-DA: UAV Tracks Anything at Night with SAM-Powered Domain Adaptation
Changhong Fu, Liangliang Yao, Haobo Zuo, Guangze Zheng, Jia Pan
RefSAM: Efficiently Adapting Segmenting Anything Model for Referring Video Object Segmentation
Yonglin Li, Jing Zhang, Xiao Teng, Long Lan, Xinwang Liu