Data Poisoning Attack

Data poisoning attacks inject malicious samples into a model's training data to degrade its performance or manipulate its behavior. Current research examines how susceptible different learning systems are to specific poisoning strategies, such as backdoor attacks, label flipping, and feature manipulation; studied targets include linear solvers, federated learning systems, large language models, and clustering algorithms. This is a critical area of study because a successful poisoning attack can undermine the reliability and trustworthiness of machine learning systems in applications ranging from healthcare to autonomous vehicles, making the development of robust defenses a priority.
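
As a concrete illustration of the simplest of these strategies, the sketch below flips a fraction of training labels on a synthetic binary-classification task and compares test accuracy against a clean baseline. It is a minimal, hypothetical example: the `flip_labels` helper and `flip_fraction` parameter are illustrative names, not taken from any of the papers below.

```python
# Minimal sketch of a label-flipping poisoning attack on a toy classifier.
# flip_labels and flip_fraction are illustrative, not from any specific paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def flip_labels(y, flip_fraction, rng):
    """Return a copy of y with a random fraction of binary labels inverted."""
    y_poisoned = y.copy()
    n_flip = int(flip_fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # invert the 0/1 labels at idx
    return y_poisoned

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: train on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attack: poison 30% of the training labels, then retrain.
y_poisoned = flip_labels(y_train, flip_fraction=0.3, rng=rng)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Even this naive attack typically costs the victim model measurable test accuracy; the backdoor and feature-manipulation attacks studied in the papers below are generally stealthier, aiming to preserve clean accuracy while embedding targeted misbehavior.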

Papers