Poisoning Attack
Poisoning attacks target machine learning models by injecting malicious data into the training process, aiming either to degrade overall model performance or to implant backdoors. Current research develops increasingly sophisticated poisoning strategies for a range of model architectures, including decision trees, neural networks, and recommender systems, and explores defenses such as robust aggregation and anomaly detection in federated learning settings. Understanding and mitigating these attacks is crucial for the reliability and security of machine learning systems across diverse applications, from autonomous driving to financial services, and the continuing interplay between stronger attacks and more robust defenses underlines the importance of this research area.
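To make the data-injection idea above concrete, the sketch below simulates a simple label-flipping poisoning attack on a synthetic binary classification task and reports how test accuracy changes as the poisoned fraction of the training set grows. This is a minimal illustration only, assuming scikit-learn's LogisticRegression and synthetic data from make_classification; the poison_labels helper is a hypothetical name for illustration and is not taken from any of the papers listed below.

```python
# Minimal sketch of a label-flipping poisoning attack (illustrative only).
# Assumes scikit-learn; the dataset and model stand in for a real pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification task standing in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def poison_labels(y, fraction, rng):
    """Flip the labels of a randomly chosen `fraction` of training points."""
    y_poisoned = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: 0 <-> 1
    return y_poisoned

# Train on progressively more poisoned labels and measure clean test accuracy.
for fraction in (0.0, 0.1, 0.3):
    y_poisoned = poison_labels(y_train, fraction, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"poisoned fraction = {fraction:.1f} -> test accuracy = {acc:.3f}")
```

With these settings, test accuracy typically drops as the flipped fraction increases, which is the performance-degradation effect described above; backdoor-style poisoning instead aims to preserve clean accuracy while binding a trigger pattern to an attacker-chosen output.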
Papers
Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models
Shawn Shan, Wenxin Ding, Josephine Passananti, Stanley Wu, Haitao Zheng, Ben Y. Zhao
FLTracer: Accurate Poisoning Attack Provenance in Federated Learning
Xinyu Zhang, Qingyu Liu, Zhongjie Ba, Yuan Hong, Tianhang Zheng, Feng Lin, Li Lu, Kui Ren