Adversarial Example
Adversarial examples are subtly altered inputs designed to fool machine learning models, primarily deep neural networks (DNNs), into making incorrect predictions. Current research focuses on improving model robustness against these attacks, exploring techniques such as ensemble methods, multi-objective representation learning, and adversarial training, often applied to architectures like ResNets and Vision Transformers. Understanding and mitigating adversarial examples is crucial for the reliability and security of AI systems across diverse applications, from image classification and natural language processing to malware detection and autonomous driving. Developing robust defenses and reliable attack-detection methods remains an active area of research.
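To illustrate the core idea of a "subtly altered input", here is a minimal sketch of a classic gradient-sign attack (FGSM-style); it is not drawn from any paper listed below, and the model, data tensors, and epsilon value are assumptions for illustration only.

```python
# Illustrative FGSM-style sketch (assumed model/inputs; epsilon is arbitrary).
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb inputs x by one step in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Single gradient-sign step, clipped back to the valid pixel range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Adversarial training, mentioned above, typically mixes such perturbed inputs into each training batch so the model also learns to classify them correctly.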
Papers
MEAD: A Multi-Armed Approach for Evaluation of Adversarial Examples Detectors
Federica Granese, Marine Picot, Marco Romanelli, Francisco Messina, Pablo Piantanida
Detecting and Recovering Adversarial Examples from Extracting Non-robust and Highly Predictive Adversarial Perturbations
Mingyu Dong, Jiahao Chen, Diqun Yan, Jingxing Gao, Li Dong, Rangding Wang
Empirical Evaluation of Physical Adversarial Patch Attacks Against Overhead Object Detection Models
Gavin S. Hartnett, Li Ang Zhang, Caolionn O'Connell, Andrew J. Lohn, Jair Aguirre
Defense against adversarial attacks on deep convolutional neural networks through nonlocal denoising
Sandhya Aneja, Nagender Aneja, Pg Emeroylariffion Abas, Abdul Ghani Naim
RSTAM: An Effective Black-Box Impersonation Attack on Face Recognition using a Mobile and Compact Printer
Xiaoliang Liu, Furao Shen, Jian Zhao, Changhai Nie