Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on building more robust models and detection methods and on exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
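As a concrete illustration of the "subtle input alteration" described above, the sketch below implements the Fast Gradient Sign Method (FGSM), one of the simplest gradient-based attacks. It is a minimal example only: the toy classifier, random input tensor, and epsilon value are illustrative assumptions and are not drawn from any of the papers listed below.

```python
# Minimal FGSM sketch: perturb an input in the direction that increases the
# model's loss, bounded element-wise by epsilon. Toy model and data are
# hypothetical placeholders for demonstration.
import torch
import torch.nn as nn

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return a perturbed copy of x that the model is more likely to misclassify."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step along the sign of the input gradient, then clamp to valid pixel range.
    perturbation = epsilon * x_adv.grad.sign()
    return (x_adv + perturbation).clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy linear classifier and a random "image" purely for demonstration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)
    label = torch.tensor([3])
    x_adv = fgsm_attack(model, x, label)
    print("max perturbation:", (x_adv - x).abs().max().item())
```

More sophisticated attacks (iterative, patch-based, or black-box) build on the same idea of optimizing a small perturbation against the model's loss, which is the setting the papers below study and defend against.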
Papers
Mitigating Accuracy-Robustness Trade-off via Balanced Multi-Teacher Adversarial Distillation
Shiji Zhao, Xizhe Wang, Xingxing Wei
Distributional Modeling for Location-Aware Adversarial Patches
Xingxing Wei, Shouwei Ruan, Yinpeng Dong, Hang Su
Evaluating Similitude and Robustness of Deep Image Denoising Models via Adversarial Attack
Jie Ning, Jiebao Sun, Yao Li, Zhichang Guo, Wangmeng Zuo
Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving
Mozhgan Pourkeshavarz, Mohammad Sabokrou, Amir Rasouli
Cooperation or Competition: Avoiding Player Domination for Multi-Target Robustness via Adaptive Budgets
Yimu Wang, Dinghuai Zhang, Yihan Wu, Heng Huang, Hongyang Zhang
Advancing Adversarial Training by Injecting Booster Signal
Hong Joo Lee, Youngjoon Yu, Yong Man Ro