Weakly Supervised Learning
Weakly supervised learning trains machine learning models from limited or incomplete labels, addressing the high cost and time of full annotation. Current research leverages techniques such as pseudo-labeling, self-training, and multiple instance learning, often integrated with architectures like U-Nets and transformers, to approach fully supervised performance across diverse applications. The approach is especially impactful in domains where labeled data is scarce, such as medical image analysis and autonomous driving, enabling robust models while significantly reducing annotation effort.
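To make the pseudo-labeling / self-training idea concrete, here is a minimal sketch: a classifier is fit on the small labeled set, its most confident predictions on unlabeled points are adopted as pseudo-labels, and the model is refit on the enlarged set. The nearest-centroid classifier, the confidence rule, and all function names below are illustrative assumptions for the sketch; real pipelines use trained networks (e.g. U-Nets or transformers) in the same loop.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    # Toy stand-in for model training: one centroid per class.
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict_with_confidence(X, classes, centroids):
    # Confidence via softmax over negative distances to centroids.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    p = np.exp(-d)
    p /= p.sum(axis=1, keepdims=True)
    return classes[p.argmax(axis=1)], p.max(axis=1)

def self_train(X_l, y_l, X_u, threshold=0.7, rounds=3):
    # Iteratively pseudo-label confident unlabeled points and refit.
    for _ in range(rounds):
        if len(X_u) == 0:
            break
        classes, centroids = nearest_centroid_fit(X_l, y_l)
        preds, conf = predict_with_confidence(X_u, classes, centroids)
        mask = conf >= threshold  # keep only confident pseudo-labels
        if not mask.any():
            break
        X_l = np.vstack([X_l, X_u[mask]])
        y_l = np.concatenate([y_l, preds[mask]])
        X_u = X_u[~mask]
    return nearest_centroid_fit(X_l, y_l)
```

The confidence threshold is the key knob: set too low, noisy pseudo-labels drift the model; set too high, no unlabeled data is ever absorbed.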