Label Poisoning Attack

Label poisoning attacks compromise the accuracy or functionality of machine learning models by manipulating the labels of training examples, typically by flipping a subset of labels to incorrect classes, while leaving the input features untouched. Current research develops both sophisticated attacks, leveraging techniques such as catastrophic forgetting and contrastive shortcut injection, and robust defenses, including density-based clustering, diffusion denoising, and alternative aggregation methods such as the mean aggregator. This area is crucial because it exposes vulnerabilities in machine learning pipelines, particularly in security-sensitive applications such as cybersecurity and human activity recognition, and motivates the development of more resilient models and training processes.
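To make the basic attack concrete, the sketch below shows the simplest form of label poisoning, random label flipping: a fraction of training labels is reassigned to a different class while the features are left untouched. This is an illustrative toy implementation, not a method from any specific paper; the function name and parameters are assumptions.

```python
import numpy as np

def flip_labels(y, flip_fraction=0.1, num_classes=2, seed=0):
    """Randomly reassign a fraction of labels to a different class.

    Only the labels are poisoned; the feature matrix is untouched,
    which is what distinguishes label poisoning from feature poisoning.
    """
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    n_flip = int(len(y) * flip_fraction)
    # choose which examples to poison, without repeats
    idx = rng.choice(len(y), size=n_flip, replace=False)
    for i in idx:
        # pick any label other than the current (correct) one
        wrong = [c for c in range(num_classes) if c != y[i]]
        y_poisoned[i] = rng.choice(wrong)
    return y_poisoned, idx

# toy binary dataset: 50 examples of class 0, 50 of class 1
y = np.zeros(100, dtype=int)
y[50:] = 1
y_p, flipped = flip_labels(y, flip_fraction=0.2)
```

A model trained on `y_p` instead of `y` sees 20% contradictory supervision, which is exactly the signal that density-based clustering and robust aggregation defenses try to detect or average out.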

Papers