Backdoor Attack
Backdoor attacks compromise machine learning models by implanting hidden triggers during training (for example, through data poisoning), causing the model to produce attacker-chosen outputs whenever the trigger appears at inference time while behaving normally on clean inputs. Current research focuses on developing and mitigating these attacks across various model architectures, including deep neural networks, vision transformers, graph neural networks, large language models, and spiking neural networks, with particular emphasis on understanding attack mechanisms and building robust defenses in federated learning and generative models. The significance of this research lies in ensuring the trustworthiness and security of increasingly prevalent machine learning systems across diverse applications, from object detection and medical imaging to natural language processing and autonomous systems.
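To make the mechanism concrete, below is a minimal, hypothetical sketch of the classic data-poisoning form of a backdoor attack: a small fraction of training images is stamped with a fixed patch trigger and relabeled to the attacker's target class, so a model trained on the poisoned set learns to associate the patch with that class. The function name `poison_dataset`, the patch trigger, and all parameters are illustrative assumptions, not the method of any paper listed below.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05,
                   patch_value=1.0, patch_size=3, seed=0):
    """Illustrative data-poisoning backdoor: stamp a small bright patch
    (the trigger) onto a random fraction of training images and relabel
    them to the attacker's target class. Shapes and values are toy choices."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger patch in the bottom-right corner of each selected image.
    images[idx, -patch_size:, -patch_size:] = patch_value
    # Relabel the poisoned samples so the model ties the trigger to the target class.
    labels[idx] = target_label
    return images, labels

# Toy usage: 100 grayscale 28x28 images with 10 classes.
x = np.random.rand(100, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=100)
x_poisoned, y_poisoned = poison_dataset(x, y, target_label=7)
```

At test time, an attacker applies the same patch to any input to steer the backdoored model toward the target class; many of the papers below study variants of this idea (transformed, generative, or feature-space triggers) and defenses against them.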
Papers
CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning
Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
Backdoor Attacks for Remote Sensing Data with Wavelet Transform
Nikolaus Dräger, Yonghao Xu, Pedram Ghamisi
Backdoor Attacks on Time Series: A Generative Approach
Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey
Untargeted Backdoor Attack against Object Detection
Chengxiao Luo, Yiming Li, Yong Jiang, Shu-Tao Xia
Dormant Neural Trojans
Feisi Fu, Panagiota Kiourti, Wenchao Li
BATT: Backdoor Attack with Transformation-based Triggers
Tong Xu, Yiming Li, Yong Jiang, Shu-Tao Xia
Backdoor Defense via Suppressing Model Shortcuts
Sheng Yang, Yiming Li, Yong Jiang, Shu-Tao Xia
Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis
Ruinan Jin, Xiaoxiao Li
Training set cleansing of backdoor poisoning by self-supervised representation learning
H. Wang, S. Karami, O. Dia, H. Ritter, E. Emamjomeh-Zadeh, J. Chen, Z. Xiang, D. J. Miller, G. Kesidis