Semantic Segmentation Models
Semantic segmentation models assign a semantic label to every pixel in an image, enabling detailed scene understanding. Current research emphasizes robustness to adverse weather conditions and adversarial attacks, as well as reducing reliance on labeled data through techniques such as weak supervision and active learning, often building on architectures like U-Net and transformers. These advances matter for applications ranging from autonomous driving and robotics to remote sensing and medical image analysis, driving progress in both model efficiency and accuracy.
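The per-pixel labeling described above can be sketched as a final argmax over per-class score maps. This is a minimal NumPy illustration of the task's output format, assuming a `(num_classes, H, W)` logits layout such as a U-Net-style decoder might produce; the `segment` function and the toy scores are hypothetical, not taken from any of the listed papers.

```python
import numpy as np

def segment(logits):
    """Assign each pixel the class with the highest score.

    logits: array of shape (num_classes, H, W) holding per-class
    scores for every pixel (layout assumed for illustration).
    Returns an (H, W) integer label map.
    """
    return np.argmax(logits, axis=0)

# Toy example: 3 classes over a 2x2 image.
logits = np.array([
    [[2.0, 0.1], [0.3, 0.2]],  # class 0 scores per pixel
    [[0.5, 1.5], [0.1, 0.9]],  # class 1 scores per pixel
    [[0.1, 0.2], [1.8, 0.1]],  # class 2 scores per pixel
])
labels = segment(logits)
# labels is a 2x2 map: [[0, 1], [2, 1]]
```

In a real model the logits come from a learned network; this sketch only shows how dense per-pixel predictions reduce to a label map.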
Papers
High-resolution semantically-consistent image-to-image translation
Mikhail Sokolov, Christopher Henry, Joni Storie, Christopher Storie, Victor Alhassan, Mathieu Turgeon-Pelchat
Certified Defences Against Adversarial Patch Attacks on Semantic Segmentation
Maksym Yatsura, Kaspar Sakmann, N. Grace Hua, Matthias Hein, Jan Hendrik Metzen
SCIM: Simultaneous Clustering, Inference, and Mapping for Open-World Semantic Scene Understanding
Hermann Blum, Marcus G. Müller, Abel Gawel, Roland Siegwart, Cesar Cadena
A Simple Approach for Visual Rearrangement: 3D Mapping and Semantic Search
Brandon Trabucco, Gunnar Sigurdsson, Robinson Piramuthu, Gaurav S. Sukhatme, Ruslan Salakhutdinov
Distribution Regularized Self-Supervised Learning for Domain Adaptation of Semantic Segmentation
Javed Iqbal, Hamza Rawal, Rehan Hafiz, Yu-Tseh Chi, Mohsen Ali
Distortion-Aware Network Pruning and Feature Reuse for Real-time Video Segmentation
Hyunsu Rhee, Dongchan Min, Sunil Hwang, Bruno Andreis, Sung Ju Hwang