Noisy Label
Noisy label learning (NLL) addresses the challenge of training machine learning models on datasets with inaccurate labels, a common problem in large-scale data collection. Current research focuses on robust algorithms and architectures, such as vision transformers and graph neural networks, that mitigate the impact of noisy labels through techniques like sample selection (e.g., the small-loss trick), loss function modification, and self-supervised learning. These advances improve the reliability and generalizability of models across applications ranging from image classification and natural language processing to medical image analysis and remote sensing, with the broader goal of building AI systems that tolerate the imperfections inherent in real-world data.
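To make the sample-selection idea concrete, below is a minimal sketch of the small-loss trick referenced in the first paper: within each batch, the samples with the smallest loss are treated as likely clean and only those contribute to the update. It assumes a PyTorch-style training loop; the names `model`, `optimizer`, and the `keep_ratio` value are illustrative placeholders, not taken from any specific paper listed here.

```python
# Minimal sketch of small-loss sample selection for noisy-label training.
# Assumption: standard PyTorch classification setup; hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def small_loss_selection_step(model, optimizer, images, noisy_labels, keep_ratio=0.7):
    """Train on the keep_ratio fraction of samples with the smallest loss,
    treating them as likely clean under the noisy-label assumption."""
    # Per-sample losses (no reduction) so we can rank samples within the batch.
    per_sample_loss = F.cross_entropy(model(images), noisy_labels, reduction="none")
    num_keep = max(1, int(keep_ratio * per_sample_loss.numel()))
    # Indices of the smallest-loss (presumed clean) samples in this batch.
    _, clean_idx = torch.topk(per_sample_loss, num_keep, largest=False)
    loss = per_sample_loss[clean_idx].mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The keep ratio is typically annealed over training, since early in training deep networks tend to fit clean labels before memorizing noisy ones; multi-label settings such as the CCTV sewer inspection paper above are precisely where this simple heuristic can fall short.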
Papers
When the Small-Loss Trick is Not Enough: Multi-Label Image Classification with Noisy Labels Applied to CCTV Sewer Inspections
Keryan Chelouche, Marie Lachaize (VERI), Marine Bernard (VERI), Louise Olgiati, Remi Cuingnet
Learning to Generate Diverse Pedestrian Movements from Web Videos with Noisy Labels
Zhizheng Liu, Joe Lin, Wayne Wu, Bolei Zhou
An Embedding is Worth a Thousand Noisy Labels
Francesco Di Salvo, Sebastian Doerrich, Ines Rieger, Christian Ledig
May the Forgetting Be with You: Alternate Replay for Learning with Noisy Labels
Monica Millunzi, Lorenzo Bonicelli, Angelo Porrello, Jacopo Credi, Petter N. Kolm, Simone Calderara
Theoretical Proportion Label Perturbation for Learning from Label Proportions in Large Bags
Shunsuke Kubo, Shinnosuke Matsuo, Daiki Suehiro, Kazuhiro Terada, Hiroaki Ito, Akihiko Yoshizawa, Ryoma Bise