Segmentation Models
Segmentation models partition images into meaningful regions, a task central to fields such as medical imaging and autonomous driving. Current research emphasizes robustness and efficiency, building on architectures such as U-Nets, Transformers, and diffusion models, and increasingly incorporates techniques such as continual learning and prompt engineering so that models can adapt to new data or tasks with minimal retraining. Together, these advances improve accuracy and reduce the need for large labeled datasets, broadening the use of segmentation models in scientific and industrial settings.
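To make the architectural vocabulary above concrete, the sketch below shows a minimal U-Net-style encoder-decoder for semantic segmentation. It is an illustrative toy written in PyTorch, not the model from any paper listed here; the channel widths, single downsampling level, and class count are assumptions chosen for brevity.

```python
# A minimal sketch of a U-Net-style encoder-decoder for semantic segmentation.
# Illustrative toy only: channel sizes, depth, and the use of PyTorch are
# assumptions for this example, not taken from the papers below.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    """Single-level U-Net: one downsampling step and one skip connection."""

    def __init__(self, in_ch: int = 3, num_classes: int = 2):
        super().__init__()
        self.enc = conv_block(in_ch, 32)           # encoder features
        self.down = nn.MaxPool2d(2)                # spatial downsampling
        self.bottleneck = conv_block(32, 64)       # low-resolution context
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec = conv_block(64, 32)              # decoder after skip concat
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)  # per-pixel logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e = self.enc(x)                            # high-resolution features
        b = self.bottleneck(self.down(e))          # coarse context
        d = self.dec(torch.cat([self.up(b), e], dim=1))  # skip connection
        return self.head(d)                        # (N, num_classes, H, W)


if __name__ == "__main__":
    model = TinyUNet(in_ch=3, num_classes=2)
    logits = model(torch.randn(1, 3, 64, 64))      # dummy RGB image
    print(logits.shape)                            # torch.Size([1, 2, 64, 64])
```

The output logits have one channel per class at the input resolution; a per-pixel argmax (or a softmax plus a segmentation loss during training) turns them into a label map.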
Papers
SELMA3D challenge: Self-supervised learning for 3D light-sheet microscopy image segmentation
Ying Chen, Rami Al-Maskari, Izabela Horvath, Mayar Ali, Luciano Höher, Kaiyuan Yang, Zengming Lin, Zhiwei Zhai, Mengzhe Shen, Dejin Xun, Yi Wang, Tony Xu, Maged Goubran, Yunheng Wu, Ali Erturk, Johannes C. Paetzold
Image Segmentation: Inducing graph-based learning
Aryan Singh, Pepijn Van de Ven, Ciarán Eising, Patrick Denny
SEG-SAM: Semantic-Guided SAM for Unified Medical Image Segmentation
Shuangping Huang, Hao Liang, Qingfeng Wang, Chulong Zhong, Zijian Zhou, Miaojing Shi
SAModified: A Foundation Model-Based Zero-Shot Approach for Refining Noisy Land-Use Land-Cover Maps
Sparsh Pekhale, Rakshith Sathish, Sathisha Basavaraju, Divya Sharma