Stealthy Backdoor Attack

Stealthy backdoor attacks compromise machine learning models, typically during training, by embedding hidden functionality: the model behaves normally on clean inputs but produces attacker-chosen outputs whenever a trigger, often imperceptible or seemingly innocuous, appears in the input. Current research develops and analyzes these attacks across model architectures including deep neural networks (DNNs), spiking neural networks (SNNs), graph convolutional networks (GCNs), and large language models (LLMs), often in federated learning settings, where poisoned client updates are difficult to audit. Because an effective backdoor can evade both human inspection and automated detection, these attacks pose a significant threat to the security and reliability of AI systems across diverse applications, driving intense investigation into both attack methodologies and robust defenses, work that is essential for the trustworthiness of increasingly prevalent AI technologies.
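
As a concrete illustration of the underlying mechanism, the sketch below shows the classic data-poisoning route to such a backdoor: a BadNets-style trigger patch is stamped into a small fraction of training images, whose labels are flipped to an attacker-chosen target class. The function name, patch size, and poison rate here are illustrative assumptions, not any specific paper's method; stealthy variants replace the visible patch with imperceptible perturbations precisely so that poisoned samples survive inspection.

```python
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.05, seed=0):
    """Minimal BadNets-style poisoning sketch (illustrative, hypothetical helper).

    Stamps a small trigger patch into a fraction of the training images and
    relabels them as `target_class`. A model trained on the poisoned set
    tends to behave normally on clean inputs but predicts `target_class`
    whenever the trigger patch is present.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Visible 3x3 max-intensity patch in the bottom-right corner; stealthy
    # attacks would instead use a low-magnitude, input-wide perturbation.
    images[idx, -3:, -3:] = 1.0
    labels[idx] = target_class
    return images, labels, idx

if __name__ == "__main__":
    # Toy usage: 100 fake 28x28 grayscale images with random labels.
    X = np.random.rand(100, 28, 28).astype(np.float32)
    y = np.random.randint(0, 10, size=100)
    Xp, yp, poisoned_idx = poison_dataset(X, y, target_class=7, poison_rate=0.1)
    print(f"Poisoned {len(poisoned_idx)} of {len(X)} samples, "
          f"all relabeled to class {yp[poisoned_idx][0]}")
```

The same poison-a-fraction, relabel-to-target pattern carries over to the other settings surveyed here; what varies is the trigger's form (graph substructures for GCNs, token sequences for LLMs, spike patterns for SNNs) and how it is hidden.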

Papers