Segment Anything Model
The Segment Anything Model (SAM) is a foundation model for image segmentation that aims to provide a universal solution: segmenting any object in any image from minimal user input. Current research focuses on improving SAM's efficiency for resource-constrained environments, adapting it to specific domains such as medical imaging and video, and combining it with other models, such as large language models, for more complex tasks. SAM's strong zero-shot generalization and its flexibility across prompt types (points, boxes, masks) are reshaping image segmentation practice, improving annotation efficiency and task performance in fields ranging from medical diagnosis to autonomous driving.
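For a given prompt (e.g., a single point), SAM typically returns several candidate masks, each with a predicted quality score, and callers keep the highest-scoring one. The sketch below illustrates that output-handling step only; the `select_best_mask` helper and the toy mask data are our own illustration, not part of the SAM library, and in real use the masks and scores would come from the `segment-anything` package's `SamPredictor.predict(..., multimask_output=True)`.

```python
# Illustrative sketch: choosing among SAM-style multi-mask outputs.
# With the real segment-anything library, `masks` and `scores` would be
# returned by SamPredictor.predict() for a point prompt; here we use
# small hand-made stand-ins so the example is self-contained.

def select_best_mask(masks, scores):
    """Pick the candidate mask with the highest predicted quality score.

    (Hypothetical helper for illustration -- not a SAM API function.)
    """
    best = max(range(len(scores)), key=lambda i: scores[i])
    return masks[best], scores[best]

# Toy stand-ins for the (H, W) boolean masks SAM would return
# for a single point prompt, with one quality score per candidate.
masks = [
    [[True, False], [False, False]],   # candidate 0: small region
    [[True, True], [True, False]],     # candidate 1: medium region
    [[True, True], [True, True]],      # candidate 2: whole image
]
scores = [0.62, 0.91, 0.78]

mask, score = select_best_mask(masks, scores)
```

Keeping the argmax-score mask mirrors the common default; applications that care about prompt ambiguity instead inspect all candidates.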
Papers
Segment Anything Model for Grain Characterization in Hard Drive Design
Kai Nichols, Matthew Hauwiller, Nicholas Propes, Shaowei Wu, Stephanie Hernandez, Mike Kautzky
The 2nd Solution for LSVOS Challenge RVOS Track: Spatial-temporal Refinement for Consistent Semantic Segmentation
Tuyen Tran
Generalized SAM: Efficient Fine-Tuning of SAM for Variable Input Image Sizes
Sota Kato, Hinako Mitsuoka, Kazuhiro Hotta
SAM-SP: Self-Prompting Makes SAM Great Again
Chunpeng Zhou, Kangjie Ning, Qianqian Shen, Sheng Zhou, Zhi Yu, Haishuai Wang
Video Object Segmentation via SAM 2: The 4th Solution for LSVOS Challenge VOS Track
Feiyu Pan, Hao Fang, Runmin Cong, Wei Zhang, Xiankai Lu
SAM-UNet: Enhancing Zero-Shot Segmentation of SAM for Universal Medical Images
Sihan Yang, Haixia Bi, Hai Zhang, Jian Sun
Segment-Anything Models Achieve Zero-shot Robustness in Autonomous Driving
Jun Yan, Pengyu Wang, Danni Wang, Weiquan Huang, Daniel Watzenig, Huilin Yin
SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation
Xinyu Xiong, Zihuang Wu, Shuangyi Tan, Wenxue Li, Feilong Tang, Ying Chen, Siying Li, Jie Ma, Guanbin Li
Extracting polygonal footprints in off-nadir images with Segment Anything Model
Kai Li, Jingbo Chen, Yupeng Deng, Yu Meng, Diyou Liu, Junxian Ma, Chenhao Wang, Xiangyu Zhao
Tuning a SAM-Based Model with Multi-Cognitive Visual Adapter to Remote Sensing Instance Segmentation
Linghao Zheng, Xinyang Pu, Feng Xu
Prompt-Based Segmentation at Multiple Resolutions and Lighting Conditions using Segment Anything Model 2
Osher Rafaeli, Tal Svoray, Roni Blushtein-Livnon, Ariel Nahlieli
Towards Cross-Domain Single Blood Cell Image Classification via Large-Scale LoRA-based Segment Anything Model
Yongcheng Li, Lingcong Cai, Ying Lu, Yupeng Zhang, Jingyan Jiang, Genan Dai, Bowen Zhang, Jingzhou Cao, Xiangzhong Zhang, Xiaomao Fan
S-SAM: SVD-based Fine-Tuning of Segment Anything Model for Medical Image Segmentation
Jay N. Paranjape, Shameema Sikder, S. Swaroop Vedula, Vishal M. Patel
From SAM to SAM 2: Exploring Improvements in Meta's Segment Anything Model
Athulya Sundaresan Geetha, Muhammad Hussain
Zero-shot 3D Segmentation of Abdominal Organs in CT Scans Using Segment Anything Model 2
Yosuke Yamagishi, Shouhei Hanaoka, Tomohiro Kikuchi, Takahiro Nakao, Yuta Nakamura, Yukihiro Nomura, Soichiro Miki, Takeharu Yoshikawa, Osamu Abe
Polyp SAM 2: Advancing Zero-shot Polyp Segmentation in Colorectal Cancer Detection
Mobina Mansoori, Sajjad Shahabodini, Jamshid Abouei, Konstantinos N. Plataniotis, Arash Mohammadi