Backdoor Poisoning Attack

Backdoor poisoning attacks compromise machine learning models by injecting maliciously crafted samples into training datasets, so that the trained model behaves normally on clean inputs but produces attacker-chosen outputs whenever a specific trigger is present. Current research focuses on developing more sophisticated attacks, particularly clean-label attacks that avoid obvious label inconsistencies, and on exploring diverse trigger mechanisms, including physical triggers and transformations such as rotation, across model architectures ranging from deep neural networks to graph neural networks and reinforcement learning agents. The significance of this research lies in its implications for the security and trustworthiness of machine learning systems across numerous applications, from image recognition and speech processing to critical infrastructure control, motivating the development of robust defenses.
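
As a concrete illustration, the minimal sketch below shows a dirty-label, BadNets-style poisoning step: a small trigger patch is stamped onto a fraction of the training images and their labels are flipped to an attacker-chosen target class. The function name `poison_dataset`, the patch size, and the poison rate are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05, rng=None):
    """Inject a BadNets-style trigger into a fraction of training samples.

    images: float array of shape (N, H, W, C), values in [0, 1]
    labels: int array of shape (N,)

    A 4x4 white square in the bottom-right corner serves as the trigger;
    poisoned samples are relabeled to `target_label` (dirty-label variant).
    """
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()

    # Choose a random subset of training samples to poison.
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    images[idx, -4:, -4:, :] = 1.0   # stamp the trigger patch
    labels[idx] = target_label       # label flip creates the backdoor mapping
    return images, labels, idx
```

A model trained on the poisoned set learns to associate the patch with `target_label`: at inference time, stamping the same patch on any input steers the prediction toward that class, while accuracy on clean inputs remains largely unaffected. Clean-label variants achieve a similar effect without flipping labels, for example by perturbing only samples that already belong to the target class.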

Papers