Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, exploring various attack strategies across different model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
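As a concrete illustration of the "subtle alteration" described above, the sketch below implements the classic fast gradient sign method (FGSM) in PyTorch. This is a minimal example for intuition only: the model, inputs, labels, and epsilon budget are hypothetical placeholders, and none of the papers listed below is tied to this particular attack.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # FGSM: perturb the input in the direction that increases the classification loss.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Take one step of size epsilon along the sign of the input gradient,
        # then clamp back to the valid pixel range [0, 1].
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

    # Hypothetical usage with a pretrained image classifier:
    # adv_images = fgsm_attack(model, images, labels, epsilon=8/255)
    # adv_preds = model(adv_images).argmax(dim=1)  # often differs from the clean predictions

Even this single-step perturbation, imperceptible to humans at small epsilon, is frequently enough to flip a standard classifier's prediction, which is why robustness and detection remain active research directions.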
Papers
Familiarity-Based Open-Set Recognition Under Adversarial Attacks
Philip Enevoldsen, Christian Gundersen, Nico Lang, Serge Belongie, Christian Igel
Be Careful When Evaluating Explanations Regarding Ground Truth
Hubert Baniecki, Maciej Chrabaszcz, Andreas Holzinger, Bastian Pfeifer, Anna Saranti, Przemyslaw Biecek
Optimal Cost Constrained Adversarial Attacks For Multiple Agent Systems
Ziqing Lu, Guanlin Liu, Lifeng Lai, Weiyu Xu
Robustness Tests for Automatic Machine Translation Metrics with Adversarial Attacks
Yichen Huang, Timothy Baldwin
Improving Robustness for Vision Transformer with a Simple Dynamic Scanning Augmentation
Shashank Kotyan, Danilo Vasconcellos Vargas
NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks
Seokil Ham, Jungwuk Park, Dong-Jun Han, Jaekyun Moon
Adversarial Examples in the Physical World: A Survey
Jiakai Wang, Donghua Wang, Jin Hu, Siyang Wu, Tingsong Jiang, Wen Yao, Aishan Liu, Xianglong Liu
Magmaw: Modality-Agnostic Adversarial Attacks on Machine Learning-Based Wireless Communication Systems
Jung-Woo Chang, Ke Sun, Nasimeh Heydaribeni, Seira Hidano, Xinyu Zhang, Farinaz Koushanfar