Data Poisoning Attack
Data poisoning attacks inject malicious samples into a model's training data to degrade its performance or manipulate its behavior. Current research examines how susceptible different learning settings are to poisoning, including linear models and solvers, federated learning systems, large language models, and clustering algorithms, under strategies such as backdoor attacks, label flipping, and feature manipulation. This is a critical area of study because a successful poisoning attack can severely undermine the reliability and trustworthiness of machine learning systems in applications ranging from healthcare to autonomous vehicles, motivating the development of robust defenses.
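To make one of these strategies concrete, below is a minimal sketch of a label-flipping attack in a scikit-learn-style workflow. The dataset, flip rates, and model choice are illustrative assumptions, not taken from any particular paper; the point is simply how corrupting a fraction of training labels degrades clean test accuracy.

```python
# Minimal label-flipping poisoning sketch (illustrative assumptions:
# synthetic binary data, logistic regression, random flip indices).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean synthetic binary classification data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def flip_labels(y, rate, rng):
    """Flip the labels of a random fraction `rate` of training points."""
    y_poisoned = y.copy()
    n_flip = int(rate * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: 0 <-> 1
    return y_poisoned

# Train on increasingly poisoned labels; evaluate on the clean test set.
for rate in (0.0, 0.1, 0.3):
    y_poisoned = flip_labels(y_train, rate, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = model.score(X_test, y_test)
    print(f"flip rate {rate:.1f}: clean test accuracy = {acc:.3f}")
```

Random flipping is the weakest variant of this attack; targeted versions that select the most influential points to flip typically cause larger accuracy drops at the same poisoning budget.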