Segment Anything
Segment Anything (SAM) is a foundation model for image segmentation that aims to segment any object in an image given a simple prompt, such as a point or a bounding box. Current research focuses on improving SAM's efficiency, accuracy, and adaptability to new domains and modalities (e.g., medical images, lidar data, video) through techniques such as lightweight adapters, prompt refinement strategies, and multi-modal fusion. By enabling efficient and accurate segmentation across diverse data types, the model has significant implications for numerous applications, including medical image analysis, autonomous driving, and remote sensing.
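The prompt-to-mask interface described above can be illustrated with a small, self-contained sketch. Note this is only a toy stand-in for the interface, not SAM's neural network: the `segment_from_point` function and the pre-labeled `labels` array below are hypothetical, introduced purely to show how a point prompt selects one object's binary mask.

```python
import numpy as np

def segment_from_point(label_map, point):
    """Toy illustration of point-prompted segmentation: return a binary
    mask of whichever labeled region contains the prompt point.
    (SAM predicts masks with an image encoder and a prompt decoder;
    this sketch only mimics the prompt -> mask interface.)"""
    y, x = point
    return (label_map == label_map[y, x]).astype(np.uint8)

# A tiny "image" whose pixels are pre-labeled into two objects (1, 2)
# on a background (0). In SAM, no such labels exist in advance; the
# model infers the object's extent from the image and the prompt.
labels = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 0, 0],
    [2, 2, 0, 0],
])

# A point prompt landing on object 1 yields object 1's full mask.
mask = segment_from_point(labels, (0, 2))
```

With the real model, the official `segment_anything` package exposes the same interface through `SamPredictor`: `set_image(...)` embeds the image once, and `predict(point_coords=..., point_labels=...)` returns candidate masks with confidence scores for a given prompt.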
Papers
Better Call SAL: Towards Learning to Segment Anything in Lidar
Aljoša Ošep, Tim Meinhardt, Francesco Ferroni, Neehar Peri, Deva Ramanan, Laura Leal-Taixé
Segment Anything for comprehensive analysis of grapevine cluster architecture and berry properties
Efrain Torres-Lomas, Jimena Lado-Jimena, Guillermo Garcia-Zamora, Luis Diaz-Garcia
EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything
Yunyang Xiong, Bala Varadarajan, Lemeng Wu, Xiaoyu Xiang, Fanyi Xiao, Chenchen Zhu, Xiaoliang Dai, Dilin Wang, Fei Sun, Forrest Iandola, Raghuraman Krishnamoorthi, Vikas Chandra
Segment Anything Model-guided Collaborative Learning Network for Scribble-supervised Polyp Segmentation
Yiming Zhao, Tao Zhou, Yunqi Gu, Yi Zhou, Yizhe Zhang, Ye Wu, Huazhu Fu