Poisoned Data
Poisoned data, corrupted samples maliciously injected into training datasets, poses a significant threat to the reliability and security of machine learning models. Current research focuses on developing robust defenses, including methods that leverage self-supervised learning, filter poisoned samples based on characteristic signatures of backdoor attacks, and selectively "unlearn" poisoned data from already trained models. These efforts are crucial for ensuring the trustworthiness of machine learning systems across applications, particularly in sensitive domains such as healthcare and autonomous driving, where model robustness is paramount.
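One common family of filtering defenses exploits the observation that backdoored samples are often fit unusually quickly, so their per-sample training loss drops far below that of clean data early in training. The sketch below, a hypothetical illustration rather than any specific paper's method, flags low-loss outliers with a robust modified z-score (median absolute deviation), assuming per-sample losses have already been collected from an early training epoch:

```python
import numpy as np

def filter_suspicious_samples(losses, z_thresh=3.5):
    """Flag samples whose early-training loss is anomalously low.

    Backdoored samples tend to be learned unusually fast, so an
    abnormally low per-sample loss is one common red flag. Uses a
    modified z-score based on the median absolute deviation (MAD),
    which is robust to the outliers we are trying to detect.
    """
    losses = np.asarray(losses, dtype=float)
    median = np.median(losses)
    mad = np.median(np.abs(losses - median))
    if mad == 0:  # degenerate case: all losses identical
        return np.zeros(len(losses), dtype=bool)
    modified_z = 0.6745 * (losses - median) / mad
    # Flag only the low-loss tail: candidates learned suspiciously fast.
    return modified_z < -z_thresh

# Toy usage: 95 clean samples with moderate loss, 5 "poisoned"
# samples whose loss is already near zero.
rng = np.random.default_rng(0)
clean_losses = rng.normal(2.0, 0.3, size=95)
poisoned_losses = rng.normal(0.05, 0.01, size=5)
losses = np.concatenate([clean_losses, poisoned_losses])
mask = filter_suspicious_samples(losses)
print(f"{mask.sum()} of {len(losses)} samples flagged for removal")
```

Flagged samples would then be dropped (or down-weighted) before retraining. Real defenses are more elaborate, for example combining loss statistics with representation clustering or self-supervised pretraining, but the thresholding step follows this general shape.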