Data Poisoning Attacks
Data poisoning attacks inject maliciously crafted samples into a training dataset to degrade a model's performance or implant targeted misbehavior. Current research examines how vulnerable different systems are to specific poisoning strategies (e.g., backdoor attacks, label flipping, and feature manipulation), including linear solvers, federated learning pipelines, large language models, and clustering algorithms. This is a critical area of study because a successful poisoning attack can undermine the reliability and trustworthiness of machine learning systems in applications ranging from healthcare to autonomous vehicles, motivating the development of robust defenses.
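Of the strategies named above, label flipping is the simplest to illustrate: an attacker who can tamper with a fraction of the training labels reassigns them to incorrect classes. The following is a minimal sketch of that idea on a toy binary-label list; the function name, fraction, and dataset are illustrative, not drawn from any specific paper.

```python
import random

def flip_labels(labels, flip_fraction=0.2, num_classes=2, seed=0):
    """Label-flipping poisoning (illustrative): reassign a random subset
    of training labels to a different class, leaving features untouched."""
    rng = random.Random(seed)
    poisoned = list(labels)
    n_flip = int(len(poisoned) * flip_fraction)
    for i in rng.sample(range(len(poisoned)), n_flip):
        # Replace the label with any class other than the current one.
        choices = [c for c in range(num_classes) if c != poisoned[i]]
        poisoned[i] = rng.choice(choices)
    return poisoned

clean = [0, 1] * 50                     # 100 clean binary labels
poisoned = flip_labels(clean, flip_fraction=0.2)
changed = sum(c != p for c, p in zip(clean, poisoned))
print(changed)                          # 20 labels were flipped
```

A model trained on the poisoned labels sees 20% contradictory supervision, which is typically enough to measurably degrade accuracy; more targeted variants flip only labels of a chosen class to cause asymmetric errors.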
Papers
(Listing dates range from August 17, 2022 to March 7, 2023; paper titles and links were not preserved in this extraction.)