Composite Attack
Composite attacks combine multiple simultaneous attacks against machine learning models, and understanding and mitigating their effectiveness is a growing area of research. Current work investigates combinations of backdoor attacks, data poisoning, adversarial examples (e.g., ℓ∞ and spatial perturbations), and instruction manipulation in large language models (LLMs), often employing generative adversarial networks (GANs) or multi-fidelity evaluation methods for attack generation and defense. Understanding and defending against composite attacks is crucial for ensuring the security and reliability of machine learning systems across diverse applications, from image recognition to natural language processing.
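As a rough illustration of the idea (a minimal sketch, not the method of any paper listed below), the following Python snippet chains a spatial perturbation with an ℓ∞-bounded one: it first picks the rotation angle that most increases the model's loss, then applies a single FGSM step on top of the rotated input. The model, eps, and angles arguments are assumed placeholders for this example.

```python
# Minimal sketch of a "composite" adversarial example: a spatial perturbation
# (rotation) followed by an l_inf-bounded FGSM step. Assumes a PyTorch image
# classifier returning logits and inputs scaled to [0, 1]; eps and the angle
# grid are illustrative, not taken from any specific paper.
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def composite_attack(model, x, y, eps=8 / 255, angles=(-10.0, -5.0, 0.0, 5.0, 10.0)):
    """Chain a spatial perturbation (rotation) with one l_inf FGSM step."""
    model.eval()

    # 1) Spatial component: choose the rotation angle that maximizes the loss.
    with torch.no_grad():
        best_x, best_loss = x, float("-inf")
        for angle in angles:
            x_rot = TF.rotate(x, angle)
            loss = F.cross_entropy(model(x_rot), y).item()
            if loss > best_loss:
                best_x, best_loss = x_rot, loss

    # 2) l_inf component: a single FGSM step on the spatially perturbed input.
    x_adv = best_x.clone().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    with torch.no_grad():
        x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```

Real composite-attack frameworks, such as the GAN-based or multi-fidelity approaches mentioned above, search over richer perturbation spaces and orderings; this sketch only shows how two attack types can be stacked.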
Papers
Data-Driven Leader-following Consensus for Nonlinear Multi-Agent Systems against Composite Attacks: A Twins Layer Approach
Xin Gong, Jintao Peng, Dong Yang, Zhan Shu, Tingwen Huang, Yukang Cui
Resilient Output Containment Control of Heterogeneous Multiagent Systems Against Composite Attacks: A Digital Twin Approach
Yukang Cui, Lingbo Cao, Michael V. Basin, Jun Shen, Tingwen Huang, Xin Gong