Label Poisoning

Label poisoning attacks compromise machine learning models by manipulating training labels, without altering the input data itself, to induce targeted misclassifications. Current research pursues both more sophisticated attacks, including ones that target graph convolutional networks or leverage universal adversarial perturbations, and robust defenses such as diffusion-based denoising and sparse data purification. The area matters because it exposes vulnerabilities in models trained on potentially untrusted data: it undermines the reliability of applications ranging from activity recognition to image classification and motivates the development of more resilient machine learning systems.
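
As a concrete illustration, the sketch below shows the simplest form of label poisoning, a targeted label flip: a fraction of one class's training labels are rewritten to another class while the features stay untouched. The scikit-learn workflow, the `flip_labels` helper, the class indices, and the 40% flip rate are all illustrative assumptions for this sketch, not a reconstruction of any specific attack from the papers listed below.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def flip_labels(y, source_class, target_class, fraction, rng=None):
    """Targeted label flip (hypothetical helper for illustration):
    relabel `fraction` of `source_class` examples as `target_class`,
    leaving the feature vectors themselves unchanged."""
    rng = rng or np.random.default_rng(0)
    y_poisoned = y.copy()
    source_idx = np.flatnonzero(y == source_class)
    n_flip = int(fraction * len(source_idx))
    flipped = rng.choice(source_idx, size=n_flip, replace=False)
    y_poisoned[flipped] = target_class
    return y_poisoned, flipped


# Toy demo: compare a clean model against one trained on poisoned labels.
X, y = make_classification(n_samples=2000, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Poison 40% of class 0's labels, relabeling them as class 1 (assumed values).
y_poisoned, _ = flip_labels(y_tr, source_class=0, target_class=1, fraction=0.4)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

# The damage concentrates on the targeted class, so measure per-class accuracy.
mask = y_te == 0
print("clean accuracy on class 0:   ", clean.score(X_te[mask], y_te[mask]))
print("poisoned accuracy on class 0:", poisoned.score(X_te[mask], y_te[mask]))
```

The targeted flip here is deliberately crude; the attacks surveyed in this area choose which labels to flip far more carefully (e.g., guided by model gradients or universal perturbations) to maximize damage at a small poisoning budget, which is exactly what makes them hard to detect.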

Papers