Label Noise
Label noise, the presence of incorrect labels in training data, significantly degrades the performance and robustness of machine learning models. Current research focuses on mitigating it through loss-function modifications, sample-selection strategies (e.g., identifying and then removing or down-weighting likely-noisy samples), and robust algorithms such as nearest-neighbor or contrastive-learning methods, often applied within deep neural networks or gradient-boosted decision trees. Addressing label noise is crucial for the reliability and generalizability of models across applications ranging from medical image analysis to natural language processing, and it is driving the development of new benchmark datasets and evaluation metrics.
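One widely used sample-selection strategy of the kind mentioned above is the "small-loss trick": samples on which the current model incurs a small loss are more likely to be correctly labelled, so training keeps only a low-loss fraction of each batch. The sketch below is a minimal, framework-agnostic illustration of this idea (the function name, the toy loss values, and the 70% keep ratio are illustrative assumptions, not taken from any specific paper):

```python
import numpy as np

def small_loss_selection(losses, keep_ratio=0.7):
    """Return the indices of the keep_ratio fraction of samples with the
    smallest loss values. Under label noise, low-loss samples are more
    likely to carry correct labels, so training can proceed on this
    subset while high-loss (likely mislabelled) samples are discarded
    or down-weighted."""
    losses = np.asarray(losses, dtype=float)
    n_keep = int(np.ceil(keep_ratio * len(losses)))
    # argsort ascending: smallest losses first
    return np.argsort(losses)[:n_keep]

# Toy batch: samples 3 and 5 have suspiciously large losses,
# suggesting their labels may be wrong.
losses = [0.2, 0.1, 0.3, 2.5, 0.4, 3.1, 0.25]
kept = small_loss_selection(losses, keep_ratio=0.7)
# The two high-loss samples (indices 3 and 5) are excluded.
```

In practice the keep ratio is often tied to an estimate of the dataset's noise rate and is decreased gradually over the first training epochs, since early in training the model has not yet memorized the noisy labels and its losses separate clean from noisy samples more reliably.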
Papers
FedDiv: Collaborative Noise Filtering for Federated Learning with Noisy Labels
Jichang Li, Guanbin Li, Hui Cheng, Zicheng Liao, Yizhou Yu
Noise robust distillation of self-supervised speech models via correlation metrics
Fabian Ritter-Gutierrez, Kuan-Po Huang, Dianwen Ng, Jeremy H. M. Wong, Hung-yi Lee, Eng Siong Chng, Nancy F. Chen