Adversarial Example
Adversarial examples are subtly altered inputs designed to fool machine learning models, primarily deep neural networks (DNNs), into making incorrect predictions. Current research focuses on improving model robustness against these attacks, exploring techniques such as ensemble methods, multi-objective representation learning, and adversarial training, often applied to architectures like ResNets and Vision Transformers. Understanding and mitigating the threat of adversarial examples is crucial for ensuring the reliability and security of AI systems across diverse applications, from image classification and natural language processing to malware detection and autonomous driving. The development of robust defenses and effective attack detection methods remains an active area of research.
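As a concrete illustration of how such a "subtly altered input" can be produced, the sketch below implements the classic Fast Gradient Sign Method (FGSM) in PyTorch. It is a minimal example under assumed inputs, not code from any of the listed papers; the function name fgsm_example and the model, x, y, and epsilon arguments are placeholders chosen for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=8 / 255):
    """Craft adversarial examples with the Fast Gradient Sign Method (FGSM).

    model   -- any differentiable classifier returning logits (assumed)
    x       -- input batch with pixel values in [0, 1]
    y       -- ground-truth labels for the batch
    epsilon -- L-infinity perturbation budget per pixel
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip back
    # to the valid pixel range so the change stays imperceptible.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Stronger attacks (e.g., iterative PGD) and the defenses studied in the papers below build on the same idea of perturbing inputs along the loss gradient within a small norm budget.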
Papers
Demystifying the Adversarial Robustness of Random Transformation Defenses
Chawin Sitawarin, Zachary Golan-Strieb, David Wagner
Comment on Transferability and Input Transformation with Additive Noise
Hoki Kim, Jinseong Park, Jaewook Lee
Adversarial Robustness is at Odds with Lazy Training
Yunjuan Wang, Enayat Ullah, Poorya Mianjy, Raman Arora
Detecting Adversarial Examples in Batches -- a geometrical approach
Danush Kumar Venkatesh, Peter Steinbach
Minimum Noticeable Difference based Adversarial Privacy Preserving Image Generation
Wen Sun, Jian Jin, Weisi Lin
Query-Efficient and Scalable Black-Box Adversarial Attacks on Discrete Sequential Data via Bayesian Optimization
Deokjae Lee, Seungyong Moon, Junhyeok Lee, Hyun Oh Song
Morphence-2.0: Evasion-Resilient Moving Target Defense Powered by Out-of-Distribution Detection
Abderrahmen Amich, Ata Kaboudi, Birhanu Eshete
Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack
Ruize Gao, Jiongxiao Wang, Kaiwen Zhou, Feng Liu, Binghui Xie, Gang Niu, Bo Han, James Cheng
ReFace: Real-time Adversarial Attacks on Face Recognition Systems
Shehzeen Hussain, Todd Huster, Chris Mesterharm, Paarth Neekhara, Kevin An, Malhar Jere, Harshvardhan Sikka, Farinaz Koushanfar
Early Transferability of Adversarial Examples in Deep Neural Networks
Oriel BenShmuel
Meet You Halfway: Explaining Deep Learning Mysteries
Oriel BenShmuel
CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models
Federico Nesti, Giulio Rossolini, Gianluca D'Amico, Alessandro Biondi, Giorgio Buttazzo
Adversarial Noises Are Linearly Separable for (Nearly) Random Neural Networks
Huishuai Zhang, Da Yu, Yiping Lu, Di He