Segmentation Model
Segmentation models partition images into meaningful regions, a task central to fields such as medical imaging and autonomous driving. Current research emphasizes robustness and efficiency, focusing on architectures such as U-Nets, Transformers, and diffusion models, and often incorporates techniques like continual learning and prompt engineering so that models adapt to new data or tasks with minimal retraining. These advances improve accuracy and reduce the need for large labeled datasets, broadening applicability across scientific and industrial settings.
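To make the U-Net family mentioned above concrete, the sketch below shows a minimal encoder-decoder segmentation network in PyTorch. It is an illustrative example only, not the method of any paper listed here; the class name TinyUNet, the channel widths, the network depth, and the class count are all assumptions chosen for brevity.

```python
# Minimal U-Net-style segmentation network (illustrative sketch, not from a listed paper).
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with ReLU, preserving spatial size (padding=1)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    """Two-level U-Net: one downsampling step, one upsampling step, one skip connection."""

    def __init__(self, in_channels: int = 3, num_classes: int = 2):
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.pool = nn.MaxPool2d(2)
        self.enc2 = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)           # 64 = 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)  # per-pixel class logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s1 = self.enc1(x)                         # full-resolution features
        b = self.enc2(self.pool(s1))              # bottleneck at half resolution
        d = self.up(b)                            # upsample back to input resolution
        d = self.dec1(torch.cat([d, s1], dim=1))  # fuse skip connection with decoder features
        return self.head(d)                       # shape: (N, num_classes, H, W)


if __name__ == "__main__":
    model = TinyUNet(in_channels=3, num_classes=2)
    logits = model(torch.randn(1, 3, 128, 128))
    print(logits.shape)  # torch.Size([1, 2, 128, 128])
```

The per-pixel logits would typically be trained with a pixel-wise cross-entropy or Dice loss; label-efficient fine-tuning approaches, such as the hybrid pretraining discussed in the first paper below, generally start from a pretrained backbone rather than training such a network from scratch.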
Papers
Hybrid diffusion models: combining supervised and generative pretraining for label-efficient fine-tuning of segmentation models
Bruno Sauvalle, Mathieu Salzmann
Biomedical Image Segmentation: A Systematic Literature Review of Deep Learning Based Object Detection Methods
Fazli Wahid, Yingliang Ma, Dawar Khan, Muhammad Aamir, Syed U. K. Bukhari
Dimensionality Reduction and Nearest Neighbors for Improving Out-of-Distribution Detection in Medical Image Segmentation
McKell Woodland, Nihil Patel, Austin Castelo, Mais Al Taie, Mohamed Eltaher, Joshua P. Yung, Tucker J. Netherton, Tiffany L. Calderone, Jessica I. Sanchez, Darrel W. Cleere, Ahmed Elsaiey, Nakul Gupta, David Victor, Laura Beretta, Ankit B. Patel, Kristy K. Brock
Estimating Pore Location of PBF-LB/M Processes with Segmentation Models
Hans Aoyang Zhou, Jan Theunissen, Marco Kemmerling, Anas Abdelrazeq, Johannes Henrich Schleifenbaum, Robert H. Schmitt