Noisy Label
Noisy label learning (NLL) addresses the challenge of training machine learning models on datasets containing inaccurate labels, a common problem in large-scale data collection. Current research focuses on robust algorithms and model architectures, such as vision transformers and graph neural networks, that mitigate the impact of noisy labels, often through sample selection, loss function modification, and self-supervised learning. These advances are crucial for improving the reliability and generalizability of models across applications ranging from image classification and natural language processing to medical image analysis and remote sensing. The ultimate goal is to build AI systems that remain dependable despite the imperfections inherent in real-world data.
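To make the sample-selection idea mentioned above concrete, the sketch below illustrates the widely used small-loss criterion: within each mini-batch, only the fraction of samples with the lowest per-sample loss is used for the gradient update, since low-loss samples are more likely to be correctly labeled early in training. This is a minimal illustrative sketch, not the method of any paper listed below; the function name, the keep_ratio parameter, and the assumption that a model, optimizer, and batch tensors are already available are all hypothetical.

import torch
import torch.nn.functional as F

def train_step_small_loss(model, optimizer, images, labels, keep_ratio=0.7):
    # Per-sample cross-entropy so we can rank samples within the batch.
    logits = model(images)
    per_sample_loss = F.cross_entropy(logits, labels, reduction="none")

    # Keep the keep_ratio fraction of samples with the smallest loss;
    # these are treated as presumably clean for this update.
    num_keep = max(1, int(keep_ratio * labels.size(0)))
    _, clean_idx = torch.topk(-per_sample_loss, num_keep)

    # Update only on the selected (small-loss) subset.
    loss = per_sample_loss[clean_idx].mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In practice the keep ratio is often scheduled to decrease as training progresses, reflecting the observation that networks fit clean samples before memorizing noisy ones.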
Papers
Noisy Label Classification using Label Noise Selection with Test-Time Augmentation Cross-Entropy and NoiseMix Learning
Hansang Lee, Haeil Lee, Helen Hong, Junmo Kim
Denoising after Entropy-based Debiasing: A Robust Training Method for Dataset Bias with Noisy Labels
Sumyeong Ahn, Se-Young Yun
Inconsistency Ranking-based Noisy Label Detection for High-quality Data
Ruibin Yuan, Hanzhi Yin, Yi Wang, Yifan He, Yushi Ye, Lei Zhang, Zhizheng Wu
Turning Silver into Gold: Domain Adaptation with Noisy Labels for Wearable Cardio-Respiratory Fitness Prediction
Yu Wu, Dimitris Spathis, Hong Jia, Ignacio Perez-Pozuelo, Tomas I. Gonzales, Soren Brage, Nicholas Wareham, Cecilia Mascolo
When Noisy Labels Meet Long Tail Dilemmas: A Representation Calibration Method
Manyi Zhang, Xuyang Zhao, Jun Yao, Chun Yuan, Weiran Huang
SplitNet: Learnable Clean-Noisy Label Splitting for Learning with Noisy Labels
Daehwan Kim, Kwangrok Ryoo, Hansang Cho, Seungryong Kim