Weakly Supervised Learning
Weakly supervised learning aims to train machine learning models from limited or incomplete labels, addressing the high cost and time of full annotation. Current research focuses on techniques such as pseudo-labeling, self-training, and multiple instance learning, often combined with architectures such as U-Nets and transformers, to approach the performance of fully supervised methods across diverse applications. The approach is particularly impactful in domains where labeled data is scarce, such as medical image analysis and autonomous driving, enabling robust models while significantly reducing annotation effort.
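To make the pseudo-labeling and self-training idea concrete, here is a minimal sketch of one training step, assuming a standard PyTorch classifier. The model, optimizer, batches, `confidence_threshold`, and `unlabeled_weight` are illustrative placeholders, not taken from any of the papers listed below.

```python
# Minimal pseudo-labeling sketch (illustrative only).
import torch
import torch.nn.functional as F

def pseudo_label_step(model, optimizer, labeled_batch, unlabeled_batch,
                      confidence_threshold=0.95, unlabeled_weight=1.0):
    """One step combining a supervised loss on labeled data with a
    pseudo-label loss on confidently predicted unlabeled data."""
    x_l, y_l = labeled_batch
    x_u = unlabeled_batch

    # Supervised loss on the small labeled set.
    loss = F.cross_entropy(model(x_l), y_l)

    # Generate pseudo-labels: the model's own most confident predictions.
    with torch.no_grad():
        probs_u = F.softmax(model(x_u), dim=1)
        confidence, pseudo_y = probs_u.max(dim=1)
        mask = confidence >= confidence_threshold  # keep confident samples only

    # Train on confident unlabeled samples as if their pseudo-labels were true.
    if mask.any():
        loss = loss + unlabeled_weight * F.cross_entropy(model(x_u[mask]),
                                                         pseudo_y[mask])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the confidence threshold controls the trade-off between label noise and coverage: a high threshold admits fewer pseudo-labeled samples but keeps them cleaner, and many self-training schemes anneal it or ramp up `unlabeled_weight` over the course of training.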
Papers
LNQ 2023 challenge: Benchmark of weakly-supervised techniques for mediastinal lymph node quantification
Reuben Dorent, Roya Khajavi, Tagwa Idris, Erik Ziegler, Bhanusupriya Somarouthu, Heather Jacene, Ann LaCasce, Jonathan Deissler, Jan Ehrhardt, Sofija Engelson, Stefan M. Fischer, Yun Gu, Heinz Handels, Satoshi Kasai, Satoshi Kondo, Klaus Maier-Hein, Julia A. Schnabel, Guotai Wang, Litingyu Wang, Tassilo Wald, Guang-Zhong Yang, Hanxiao Zhang, Minghui Zhang, Steve Pieper, Gordon Harris, Ron Kikinis, Tina Kapur
Dynamic Label Injection for Imbalanced Industrial Defect Segmentation
Emanuele Caruso, Francesco Pelosin, Alessandro Simoni, Marco Boschetti
Weakly Supervised Pretraining and Multi-Annotator Supervised Finetuning for Facial Wrinkle Detection
Ik Jun Moon, Junho Moon, Ikbeom Jang