Segment Anything Model
The Segment Anything Model (SAM) is a foundation model for image segmentation that aims to segment any object in any image from minimal user input, such as point, box, or mask prompts. Current research focuses on improving SAM's efficiency for resource-constrained environments, adapting it to specific domains such as medical imaging and video, and combining it with other models, including large language models, for more complex tasks. SAM's strong zero-shot generalization and flexible prompting are reshaping image segmentation practice, improving annotation efficiency and task performance in fields ranging from medical diagnosis to autonomous driving.
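To make the prompt-based workflow concrete, here is a minimal sketch of SAM's promptable interface using the official segment-anything package. The checkpoint filename, image path, and the example point coordinates are placeholders, not values from the papers above.

```python
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM backbone; "vit_h" is the largest released variant.
# The checkpoint path is an assumption for this sketch.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Embed the image once; different prompts can then be evaluated cheaply
# against the cached image embedding.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground point prompt (label 1 = foreground, 0 = background).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),  # hypothetical (x, y) click
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks with quality scores
)
best_mask = masks[np.argmax(scores)]  # boolean mask of shape (H, W)
```

Caching the image embedding and re-running only the lightweight mask decoder per prompt is what makes interactive, zero-shot use practical; the efficiency-focused papers below target the cost of that image encoder.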
Papers
ESP-MedSAM: Efficient Self-Prompting SAM for Universal Domain-Generalized Medical Image Segmentation
Qing Xu, Jiaxuan Li, Xiangjian He, Ziyu Liu, Zhen Chen, Wenting Duan, Chenxin Li, Maggie M. He, Fiseha B. Tesema, Wooi P. Cheah, Yi Wang, Rong Qu, Jonathan M. Garibaldi
Seismic Fault SAM: Adapting SAM with Lightweight Modules and 2.5D Strategy for Fault Detection
Ran Chen, Zeren Zhang, Jinwen Ma
Virtually Objective Quantification of in vitro Wound Healing Scratch Assays with the Segment Anything Model
Katja Löwenstein, Johanna Rehrl, Anja Schuster, Michael Gadermayr
HRSAM: Efficiently Segment Anything in High-Resolution Images
You Huang, Wenbin Lai, Jiayi Ji, Liujuan Cao, Shengchuan Zhang, Rongrong Ji
SAVE: Segment Audio-Visual Easy way using Segment Anything Model
Khanh-Binh Nguyen, Chae Jung Park