Label Perturbation

Label perturbation, the intentional or unintentional alteration of training data labels, is a significant area of research affecting a wide range of machine learning models. Current work centers on three threads: understanding how label noise degrades model performance and robustness; developing techniques such as adaptive label perturbation to improve model calibration and mitigate overfitting; and investigating how label perturbations affect explainability in models such as graph neural networks and large language models. These studies are crucial for building more reliable and trustworthy machine learning systems, particularly in safety-critical applications, and for deepening our fundamental understanding of how these models learn and generalize.
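
To make the basic setup concrete, the sketch below injects symmetric label noise, the simplest and most widely studied form of label perturbation, in which a fixed fraction of labels is flipped uniformly at random to a different class. This is a minimal illustration of the general technique, not the method of any particular paper; the function name `perturb_labels` and the 20% noise rate are illustrative choices.

```python
import numpy as np

def perturb_labels(labels, noise_rate, num_classes, rng=None):
    """Symmetric label perturbation: flip a fraction `noise_rate`
    of labels uniformly at random to a different class."""
    rng = rng or np.random.default_rng(0)
    labels = np.asarray(labels).copy()
    n = len(labels)
    # Choose which examples receive a perturbed label.
    flip_idx = rng.choice(n, size=int(noise_rate * n), replace=False)
    for i in flip_idx:
        # Draw a replacement uniformly from the other classes.
        candidates = [c for c in range(num_classes) if c != labels[i]]
        labels[i] = rng.choice(candidates)
    return labels

# Example: corrupt 20% of labels in a 10-class problem.
clean = np.random.default_rng(42).integers(0, 10, size=1000)
noisy = perturb_labels(clean, noise_rate=0.2, num_classes=10)
print((clean != noisy).mean())  # ~0.2
```

Training on `noisy` while evaluating on clean held-out labels is the standard protocol for studying robustness to label noise; adaptive variants differ mainly in choosing which labels to perturb, and by how much, based on the model's training dynamics rather than uniformly at random.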

Papers