Noisy Label
Noisy label learning (NLL) tackles the challenge of training machine learning models on datasets containing inaccurate labels, a common problem in large-scale data collection. Current research focuses on robust algorithms and model architectures, such as vision transformers and graph neural networks, that mitigate the negative impact of noisy labels, often through sample selection, loss function modification, and self-supervised learning. These advances are crucial for improving the reliability and generalizability of machine learning models across applications ranging from image classification and natural language processing to medical image analysis and remote sensing. The ultimate goal is AI systems that remain dependable despite the label imperfections inherent in real-world data. A minimal sketch of one such technique, the small-loss sample-selection trick, follows below.
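The sketch below illustrates the widely used small-loss heuristic for sample selection: examples with the lowest per-sample loss in a batch are treated as likely clean and are the only ones used for the gradient update. This is an illustrative PyTorch sketch, not the method of any paper listed here; the function names (select_small_loss, training_step) and the keep_ratio parameter are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def select_small_loss(logits, labels, keep_ratio=0.7):
    """Return indices of the keep_ratio fraction of samples with the lowest loss.

    Low-loss samples are assumed to be more likely to carry clean labels,
    which is the core assumption behind the small-loss trick.
    """
    per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
    num_keep = max(1, int(keep_ratio * labels.size(0)))
    keep_idx = torch.argsort(per_sample_loss)[:num_keep]
    return keep_idx

def training_step(model, optimizer, images, noisy_labels, keep_ratio=0.7):
    """One training step that backpropagates only through the presumed-clean subset."""
    logits = model(images)
    # Detach when ranking so selection does not affect gradients.
    keep_idx = select_small_loss(logits.detach(), noisy_labels, keep_ratio)
    loss = F.cross_entropy(logits[keep_idx], noisy_labels[keep_idx])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, keep_ratio is often scheduled to decrease as training progresses, since models tend to fit clean examples before memorizing noisy ones; multi-label settings, as noted in the sewer-inspection paper below, require adaptations beyond this basic trick.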
Papers
Learning from Noisy Labels via Self-Taught On-the-Fly Meta Loss Rescaling
Michael Heck, Christian Geishauser, Nurul Lubis, Carel van Niekerk, Shutong Feng, Hsien-Chin Lin, Benjamin Matthias Ruppik, Renato Vukovic, Milica Gašić
CRoF: CLIP-based Robust Few-shot Learning on Noisy Labels
Shizhuo Deng, Bowen Han, Jiaqi Chen, Hao Wang, Dongyue Chen, Tong Jia
Robust Testing for Deep Learning using Human Label Noise
Gordon Lim, Stefan Larson, Kevin Leach
In-Context Learning with Noisy Labels
Junyong Kang, Donghyun Son, Hwanjun Song, Buru Chang
Curriculum Fine-tuning of Vision Foundation Model for Medical Image Classification Under Label Noise
Yeonguk Yu, Minhwan Ko, Sungho Shin, Kangmin Kim, Kyoobin Lee
When the Small-Loss Trick is Not Enough: Multi-Label Image Classification with Noisy Labels Applied to CCTV Sewer Inspections
Keryan Chelouche, Marie Lachaize (VERI), Marine Bernard (VERI), Louise Olgiati, Remi Cuingnet
Learning to Generate Diverse Pedestrian Movements from Web Videos with Noisy Labels
Zhizheng Liu, Joe Lin, Wayne Wu, Bolei Zhou