Backdoor Attack
Backdoor attacks compromise machine learning models by implanting hidden triggers during training, typically through data poisoning, so that the model behaves normally on clean inputs but produces attacker-chosen outputs whenever the trigger is present. Current research develops and mitigates these attacks across a range of architectures, including deep neural networks, vision transformers, graph neural networks, large language models, and spiking neural networks, with particular emphasis on understanding attack mechanisms and building robust defenses for federated learning and generative models. This research matters for ensuring the trustworthiness and security of increasingly prevalent machine learning systems in applications ranging from object detection and medical imaging to natural language processing and autonomous systems.
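As a rough illustration of the trigger-poisoning mechanism described above, the sketch below stamps a small patch onto a fraction of training images and relabels them to an attacker-chosen class, in the style of classic BadNets-type data poisoning. This is a minimal, hypothetical example; the function name `poison_dataset` and its parameters are assumptions for illustration and are not drawn from any of the papers listed here.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.1,
                   patch_value=1.0, patch_size=3, seed=0):
    """Illustrative data-poisoning backdoor: add a small trigger patch to a
    fraction of training images and relabel them to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n = len(images)
    poisoned_idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    for i in poisoned_idx:
        # Stamp a solid square trigger in the bottom-right corner of the image.
        images[i, -patch_size:, -patch_size:] = patch_value
        # Relabel so the model learns to associate the trigger with the target class.
        labels[i] = target_label
    return images, labels, poisoned_idx

# Example usage on MNIST-shaped data: a model trained on (x_poisoned, y_poisoned)
# would tend to predict class 7 whenever the corner patch appears at test time.
x_train = np.random.rand(1000, 28, 28).astype(np.float32)
y_train = np.random.randint(0, 10, size=1000)
x_poisoned, y_poisoned, idx = poison_dataset(x_train, y_train, target_label=7)
```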
Papers
Semantic Shield: Defending Vision-Language Models Against Backdooring and Poisoning via Fine-grained Knowledge Alignment
Alvi Md Ishmam, Christopher Thomas
LoBAM: LoRA-Based Backdoor Attack on Model Merging
Ming Yin, Jingyang Zhang, Jingwei Sun, Minghong Fang, Hai Li, Yiran Chen
Twin Trigger Generative Networks for Backdoor Attacks against Object Detection
Zhiying Li, Zhi Liu, Guanggang Geng, Shreyank N Gowda, Shuyuan Lin, Jian Weng, Xiaobo Jin
When Backdoors Speak: Understanding LLM Backdoor Attacks Through Model-Generated Explanations
Huaizhi Ge, Yiming Li, Qifan Wang, Yongfeng Zhang, Ruixiang Tang
DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning
Kichang Lee, Yujin Shin, Jonghyuk Yun, Jun Han, JeongGil Ko
TrojanRobot: Backdoor Attacks Against LLM-based Embodied Robots in the Physical World
Xianlong Wang, Hewen Pan, Hangtao Zhang, Minghui Li, Shengshan Hu, Ziqi Zhou, Lulu Xue, Peijin Guo, Yichen Wang, Wei Wan, Aishan Liu, Leo Yu Zhang
Reliable Poisoned Sample Detection against Backdoor Attacks Enhanced by Sharpness Aware Minimization
Mingda Zhang, Mingli Zhu, Zihao Zhu, Baoyuan Wu