Noisy Labels
Noisy label learning (NLL) tackles the challenge of training machine learning models on datasets containing inaccurate labels, a common problem in large-scale data collection. Current research focuses on robust algorithms and model architectures, such as vision transformers and graph neural networks, that mitigate the impact of noisy labels through techniques like sample selection, loss-function modification, and self-supervised learning. These advances are crucial for improving the reliability and generalizability of machine learning models across applications ranging from image classification and natural language processing to medical image analysis and remote sensing. The ultimate goal is to build robust, reliable AI systems that can handle the imperfections inherent in real-world data.
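To make the sample-selection idea concrete, here is a minimal NumPy sketch of the widely used small-loss criterion: samples whose per-example loss is smallest are treated as likely clean, and only those are kept for the next training round. The function name, the `noise_rate` parameter, and the toy loss values are illustrative, not taken from any specific paper above.

```python
import numpy as np

def small_loss_selection(losses, noise_rate):
    """Keep the fraction (1 - noise_rate) of samples with the smallest
    per-example loss; these are the most likely correctly labeled ones."""
    n_keep = int(len(losses) * (1.0 - noise_rate))
    order = np.argsort(losses)        # ascending: smallest loss first
    return np.sort(order[:n_keep])    # indices of presumed-clean samples

# Toy example: samples 1 and 3 have large loss (likely mislabeled),
# so with an assumed 30% noise rate they are excluded.
losses = np.array([0.1, 2.3, 0.2, 1.9, 0.15, 0.3])
clean_idx = small_loss_selection(losses, noise_rate=0.3)
print(clean_idx)  # -> [0 2 4 5]
```

In practice the per-example losses would come from the model itself (e.g., cross-entropy with no reduction), and the estimated noise rate is either known from the benchmark or inferred during training.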
Papers
UDAMA: Unsupervised Domain Adaptation through Multi-discriminator Adversarial Training with Noisy Labels Improves Cardio-fitness Prediction
Yu Wu, Dimitris Spathis, Hong Jia, Ignacio Perez-Pozuelo, Tomas Gonzales, Soren Brage, Nicholas Wareham, Cecilia Mascolo
LaplaceConfidence: a Graph-based Approach for Learning with Noisy Labels
Mingcai Chen, Yuntao Du, Wei Tang, Baoming Zhang, Hao Cheng, Shuwei Qian, Chongjun Wang
LNL+K: Learning with Noisy Labels and Noise Source Distribution Knowledge
Siqi Wang, Bryan A. Plummer
FedNoisy: Federated Noisy Label Learning Benchmark
Siqi Liang, Jintao Huang, Junyuan Hong, Dun Zeng, Jiayu Zhou, Zenglin Xu
MILD: Modeling the Instance Learning Dynamics for Learning with Noisy Labels
Chuanyang Hu, Shipeng Yan, Zhitong Gao, Xuming He