Label Poisoning Attack
Label poisoning attacks compromise the accuracy or functionality of machine learning models by subtly manipulating the labels of training examples while leaving the input features untouched. Current research focuses both on developing more sophisticated attacks, leveraging techniques such as catastrophic forgetting and contrastive shortcut injection, and on building robust defenses, including density-based clustering, diffusion denoising, and alternative aggregation methods such as the mean aggregator. This area matters because it exposes vulnerabilities in machine learning pipelines, particularly in security-sensitive applications such as cybersecurity and human activity recognition, and motivates the development of more resilient models and training processes.
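To make the threat model concrete, here is a minimal sketch of the simplest form of label poisoning, random label flipping: a fraction of training labels is reassigned to a different class while the features are left unchanged. The `flip_labels` helper and its parameters are illustrative assumptions for this sketch, not taken from any specific paper surveyed above.

```python
import numpy as np

def flip_labels(y, flip_fraction=0.2, num_classes=2, rng=None):
    """Label-flipping poisoning sketch (illustrative, not from a specific paper).

    Reassigns a random subset of training labels to a different class,
    leaving the input features untouched.
    """
    rng = rng or np.random.default_rng(0)
    y_poisoned = y.copy()
    n_flip = int(len(y) * flip_fraction)
    # Choose which examples to poison.
    idx = rng.choice(len(y), size=n_flip, replace=False)
    # Shift each chosen label by a nonzero offset so it always changes class.
    offsets = rng.integers(1, num_classes, size=n_flip)
    y_poisoned[idx] = (y_poisoned[idx] + offsets) % num_classes
    return y_poisoned, idx

# Example: poison 10% of a balanced binary label vector.
y = np.zeros(100, dtype=int)
y[50:] = 1
y_poisoned, flipped_idx = flip_labels(y, flip_fraction=0.1)
print((y_poisoned != y).sum())  # → 10 labels changed
```

Defenses such as density-based clustering work against exactly this kind of corruption: flipped examples tend to sit far from the bulk of their (new) class in feature space, so they can be detected and filtered before or during training.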