Data Poisoning Attacks
Data poisoning attacks inject malicious data into training datasets to compromise the performance or behavior of machine learning models. Current research focuses on how different poisoning strategies (e.g., backdoor attacks, label flipping, feature manipulation) exploit the vulnerabilities of various learning settings and model classes, including linear solvers, federated learning systems, large language models, and clustering algorithms. This is a critical area of study because a successful data poisoning attack can severely undermine the reliability and trustworthiness of machine learning systems across numerous applications, from healthcare to autonomous vehicles, necessitating the development of robust defenses.
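To make the label-flipping strategy concrete, below is a minimal sketch of a targeted label-flipping attack. All names (`flip_class_labels`, the toy 2-D Gaussian dataset, the nearest-centroid classifier) are illustrative assumptions, not from any specific paper; the point is only to show how flipping a fraction of one class's labels drags that class's learned statistics toward the attacker's goal.

```python
import numpy as np

def flip_class_labels(y, target_class, fraction, rng):
    """Targeted label-flipping poison: relabel a fraction of one class.

    Illustrative helper, assuming binary labels in {0, 1}.
    """
    idx = np.flatnonzero(y == target_class)
    flip = rng.choice(idx, size=int(fraction * len(idx)), replace=False)
    y_poisoned = y.copy()
    y_poisoned[flip] = 1 - target_class
    return y_poisoned

def nearest_centroid_fit(X, y):
    # One centroid per class; a deliberately simple stand-in for a model.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def nearest_centroid_predict(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
# Toy dataset: two well-separated Gaussian clusters, 200 points each.
X = np.vstack([rng.normal(-2.0, 1.0, (200, 2)),
               rng.normal(2.0, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Poison: relabel 40% of class 1 as class 0, pulling the class-0
# centroid toward class 1 and shifting the decision boundary.
y_bad = flip_class_labels(y, target_class=1, fraction=0.4, rng=rng)

clean_acc = (nearest_centroid_predict(nearest_centroid_fit(X, y), X) == y).mean()
poisoned_acc = (nearest_centroid_predict(nearest_centroid_fit(X, y_bad), X) == y).mean()
```

Accuracy is always evaluated against the true labels `y`; the poisoned model was merely trained on `y_bad`. A backdoor attack differs in that the attacker also perturbs the features (planting a trigger pattern) rather than only the labels, but the training-time injection mechanism is analogous.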