Label Poisoning
Label poisoning attacks compromise machine learning models by subtly manipulating training-data labels, without altering the underlying data, to induce specific misclassifications. Current research develops both increasingly sophisticated attacks, including ones targeting graph convolutional networks and ones leveraging universal adversarial perturbations, and robust defenses such as diffusion-based denoising and sparse data purification. The area matters because it exposes vulnerabilities in models trained on potentially untrusted data, affecting the reliability of applications from activity recognition to image classification and motivating the development of more resilient machine learning systems.
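As a concrete illustration, below is a minimal sketch of a targeted label-flipping attack, the simplest instance of label poisoning: a fraction of one class's training labels are relabeled as another class while the features stay untouched. The helper `poison_labels`, the 30% flip rate, and the synthetic dataset are all illustrative choices, not taken from any of the papers summarized here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split


def poison_labels(y, source=0, target=1, flip_fraction=0.3, seed=0):
    """Relabel a fraction of `source`-class training examples as `target`.

    Only the labels change; the feature vectors are left untouched,
    which is what distinguishes label poisoning from data poisoning.
    """
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    source_idx = np.flatnonzero(y == source)
    flip_idx = rng.choice(source_idx,
                          size=int(flip_fraction * len(source_idx)),
                          replace=False)
    y_poisoned[flip_idx] = target
    return y_poisoned


# A clean binary classification task; test labels are never poisoned.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train,
                                                 poison_labels(y_train))

# The poisoned model's recall on the attacked class drops sharply,
# while overall accuracy may mask the damage.
for name, model in [("clean", clean), ("poisoned", poisoned)]:
    y_pred = model.predict(X_test)
    print(f"{name:8s} accuracy={model.score(X_test, y_test):.3f} "
          f"class-0 recall={recall_score(y_test, y_pred, pos_label=0):.3f}")
```

In practice, research-grade attacks choose which labels to flip adversarially, for example via influence estimates or universal adversarial perturbations, rather than at random, and defenses such as the denoising and purification methods mentioned above aim to detect or correct these mislabeled points before or during training.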