Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, exploring various attack strategies across different model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
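The core idea — a small, targeted perturbation of the input that flips a model's prediction — can be sketched with the Fast Gradient Sign Method (FGSM) on a toy logistic-regression classifier. This is a minimal illustration only; the weights, input, and epsilon below are made-up values chosen so the prediction flip is visible, not taken from any of the papers listed on this page.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier (hypothetical weights, for illustration only).
w = np.array([2.0, -1.0])
b = 0.0

def predict_prob(x):
    """Probability that x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm_attack(x, y_true, eps):
    """Fast Gradient Sign Method: take one step in the direction that
    increases the cross-entropy loss, bounded by eps per coordinate."""
    p = predict_prob(x)
    grad_x = (p - y_true) * w      # dL/dx for logistic cross-entropy
    return x + eps * np.sign(grad_x)

x = np.array([0.3, 0.1])           # clean input: predicted class 1
x_adv = fgsm_attack(x, y_true=1.0, eps=0.2)

print(predict_prob(x))             # above 0.5 -> class 1
print(predict_prob(x_adv))         # below 0.5 -> flipped to class 0
```

With `eps=0.2` the perturbed input differs from the original by at most 0.2 in each coordinate, yet the predicted class flips — the same mechanism, scaled up to deep networks and higher-dimensional inputs, underlies many of the attacks studied in the papers below.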
Papers
Outlier Robust Adversarial Training
Shu Hu, Zhenhuan Yang, Xin Wang, Yiming Ying, Siwei Lyu
DAD++: Improved Data-free Test Time Adversarial Defense
Gaurav Kumar Nayak, Inder Khatri, Shubham Randive, Ruchit Rawal, Anirban Chakraborty
Machine Translation Models Stand Strong in the Face of Adversarial Attacks
Pavel Burnyshev, Elizaveta Kostenok, Alexey Zaytsev
Counterfactual Explanations via Locally-guided Sequential Algorithmic Recourse
Edward A. Small, Jeffrey N. Clark, Christopher J. McWilliams, Kacper Sokol, Jeffrey Chan, Flora D. Salim, Raul Santos-Rodriguez
Adversarial attacks on hybrid classical-quantum Deep Learning models for Histopathological Cancer Detection
Biswaraj Baral, Reek Majumdar, Bhavika Bhalgamiya, Taposh Dutta Roy
Adversarially Robust Deep Learning with Optimal-Transport-Regularized Divergences
Jeremiah Birrell, Mohammadreza Ebrahimi
DiffDefense: Defending against Adversarial Attacks via Diffusion Models
Hondamunige Prasanna Silva, Lorenzo Seidenari, Alberto Del Bimbo
How adversarial attacks can disrupt seemingly stable accurate classifiers
Oliver J. Sutton, Qinghua Zhou, Ivan Y. Tyukin, Alexander N. Gorban, Alexander Bastounis, Desmond J. Higham
MathAttack: Attacking Large Language Models Towards Math Solving Ability
Zihao Zhou, Qiufeng Wang, Mingyu Jin, Jie Yao, Jianan Ye, Wei Liu, Wei Wang, Xiaowei Huang, Kaizhu Huang
Improving Visual Quality and Transferability of Adversarial Attacks on Face Recognition Simultaneously with Adversarial Restoration
Fengfan Zhou, Hefei Ling, Yuxuan Shi, Jiazhong Chen, Ping Li
Toward Defensive Letter Design
Rentaro Kataoka, Akisato Kimura, Seiichi Uchida
Robust and Efficient Interference Neural Networks for Defending Against Adversarial Attacks in ImageNet
Yunuo Xiong, Shujuan Liu, Hongwei Xiong
Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection
Weijie Wang, Zhengyu Zhao, Nicu Sebe, Bruno Lepri
Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception
Zengxi Zhang, Zhiying Jiang, Zeru Shi, Jinyuan Liu, Risheng Liu
Enhancing Infrared Small Target Detection Robustness with Bi-Level Adversarial Framework
Zhu Liu, Zihang Chen, Jinyuan Liu, Long Ma, Xin Fan, Risheng Liu