Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, exploring various attack strategies across different model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
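The mechanics of "subtly altering input data" are easiest to see in the classic fast gradient sign method (FGSM, Goodfellow et al., 2015). Below is a minimal PyTorch sketch; the function name, epsilon value, and [0, 1] pixel range are illustrative assumptions, and the papers listed here generally study far more sophisticated attacks and defenses.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: perturb x by epsilon in the direction of the
    sign of the loss gradient, aiming to flip the model's prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to
    # keep the perturbed input in the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Stronger white-box attacks, such as PGD, iterate this gradient step under a norm constraint rather than taking it once.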
Papers
DistriBlock: Identifying adversarial audio samples by leveraging characteristics of the output distribution
Matías P. Pizarro B., Dorothea Kolossa, Asja Fischer
Trust-Aware Resilient Control and Coordination of Connected and Automated Vehicles
H M Sabbir Ahmad, Ehsan Sabouni, Wei Xiao, Christos G. Cassandras, Wenchao Li
Don't Retrain, Just Rewrite: Countering Adversarial Perturbations by Rewriting Text
Ashim Gupta, Carter Wood Blum, Temma Choji, Yingjie Fei, Shalin Shah, Alakananda Vempala, Vivek Srikumar
Adversarial Attacks on Leakage Detectors in Water Distribution Networks
Paul Stahlhofen, André Artelt, Luca Hermes, Barbara Hammer
IDEA: Invariant Defense for Graph Adversarial Robustness
Shuchang Tao, Qi Cao, Huawei Shen, Yunfan Wu, Bingbing Xu, Xueqi Cheng
PEARL: Preprocessing Enhanced Adversarial Robust Learning of Image Deraining for Semantic Segmentation
Xianghao Jiao, Yaohua Liu, Jiaxin Gao, Xinyuan Chu, Risheng Liu, Xin Fan
How do humans perceive adversarial text? A reality check on the validity and naturalness of word-based adversarial attacks
Salijona Dyrmishi, Salah Ghamizi, Maxime Cordy
Relating Implicit Bias and Adversarial Attacks through Intrinsic Dimension
Lorenzo Basile, Nikos Karantzas, Alberto D'Onofrio, Luca Bortolussi, Alex Rodriguez, Fabio Anselmi
AdvFunMatch: When Consistent Teaching Meets Adversarial Robustness
Zihui Wu, Haichang Gao, Bingqian Zhou, Ping Wang
The Best Defense is a Good Offense: Adversarial Augmentation against Adversarial Attacks
Iuri Frosio, Jan Kautz
QFA2SR: Query-Free Adversarial Transfer Attacks to Speaker Recognition Systems
Guangke Chen, Yedi Zhang, Zhe Zhao, Fu Song
Expressive Losses for Verified Robustness via Convex Combinations
Alessandro De Palma, Rudy Bunel, Krishnamurthy Dvijotham, M. Pawan Kumar, Robert Stanforth, Alessio Lomuscio
Enhancing Accuracy and Robustness through Adversarial Training in Class Incremental Continual Learning
Minchan Kwon, Kangil Kim
DiffProtect: Generate Adversarial Examples with Diffusion Models for Facial Privacy Protection
Jiang Liu, Chun Pong Lau, Rama Chellappa
Adversarial Nibbler: A Data-Centric Challenge for Improving the Safety of Text-to-Image Models
Alicia Parrish, Hannah Rose Kirk, Jessica Quaye, Charvi Rastogi, Max Bartolo, Oana Inel, Juan Ciro, Rafael Mosquera, Addison Howard, Will Cukierski, D. Sculley, Vijay Janapa Reddi, Lora Aroyo
Latent Magic: An Investigation into Adversarial Examples Crafted in the Semantic Latent Space
BoYang Zheng
Towards Benchmarking and Assessing Visual Naturalness of Physical World Adversarial Attacks
Simin Li, Shuing Zhang, Gujun Chen, Dong Wang, Pu Feng, Jiakai Wang, Aishan Liu, Xin Yi, Xianglong Liu
Flying Adversarial Patches: Manipulating the Behavior of Deep Learning-based Autonomous Multirotors
Pia Hanfeld, Marina M.-C. Höhne, Michael Bussmann, Wolfgang Hönig
Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation
Kira Maag, Asja Fischer
The defender's perspective on automatic speaker verification: An overview
Haibin Wu, Jiawen Kang, Lingwei Meng, Helen Meng, Hung-yi Lee