Data Poisoning

Data poisoning attacks manipulate training data to compromise the performance or security of machine learning models. Current research focuses on understanding how vulnerable different model families and training paradigms, including large language models, graph neural networks, and federated learning systems, are to different poisoning strategies: targeted backdoor attacks, which implant hidden triggers that cause misbehavior on attacker-chosen inputs, and indiscriminate corruption, which degrades overall accuracy. This is a critical area of study because data poisoning threatens the reliability and trustworthiness of AI systems across diverse applications, from power grids to healthcare diagnostics and autonomous driving. Effective defenses are actively being developed, but the arms race between attackers and defenders continues to drive research in this field.
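
To make the two attack families concrete, here is a minimal sketch in NumPy of how a poisoned training set might be constructed. It is illustrative only: the toy dataset, the function names, and parameters such as `poison_rate` and the trigger placement are assumptions for this example, not taken from any of the papers below.

```python
# Minimal sketch of the two poisoning strategies named above, on toy
# image-style data. All names and parameters here are illustrative
# assumptions, not drawn from any specific paper.
import numpy as np

rng = np.random.default_rng(0)

def make_toy_dataset(n=1000, size=8, n_classes=2):
    """Random 'images' with random binary labels, standing in for real data."""
    X = rng.random((n, size, size), dtype=np.float32)
    y = rng.integers(0, n_classes, size=n)
    return X, y

def poison_backdoor(X, y, target_label=1, poison_rate=0.05):
    """Targeted backdoor attack: stamp a small trigger patch onto a
    fraction of samples and relabel them to the attacker's target class,
    so a model trained on the data learns 'trigger => target_label'."""
    X, y = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(len(X) * poison_rate), replace=False)
    X[idx, -2:, -2:] = 1.0   # 2x2 bright corner patch acts as the trigger
    y[idx] = target_label    # targeted relabeling
    return X, y, idx

def poison_indiscriminate(y, flip_rate=0.2):
    """Indiscriminate (availability) attack: randomly flip labels to
    degrade overall accuracy rather than plant a specific trigger."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(len(y) * flip_rate), replace=False)
    y[idx] = 1 - y[idx]      # binary labels assumed for simplicity
    return y, idx

X, y = make_toy_dataset()
X_bd, y_bd, poisoned_idx = poison_backdoor(X, y)
y_flip, flipped_idx = poison_indiscriminate(y)
print(f"backdoored {len(poisoned_idx)} samples, flipped {len(flipped_idx)} labels")
```

The contrast in goals is visible in the code: the backdoor variant changes both inputs and labels for a small, stealthy fraction of the data, while the indiscriminate variant touches only labels but corrupts a much larger share.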

Papers