Adversarial Example
Adversarial examples are subtly altered inputs designed to fool machine learning models, primarily deep neural networks (DNNs), into making incorrect predictions. Current research focuses on improving model robustness against these attacks, exploring techniques such as ensemble methods, multi-objective representation learning, and adversarial training, often applied to architectures like ResNets and Vision Transformers. Understanding and mitigating adversarial examples is crucial for the reliability and security of AI systems across diverse applications, from image classification and natural language processing to malware detection and autonomous driving. The development of robust defenses and effective attack-detection methods remains an active area of investigation.
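To make the threat model concrete, the sketch below crafts an adversarial example with the fast gradient sign method (FGSM), one of the simplest and best-known attacks. It is a minimal illustration rather than the method of any paper listed here: the classifier, the [0, 1] input range, and the perturbation budget epsilon are all assumptions for the example.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial examples with the fast gradient sign method.

    model:   any differentiable classifier (hypothetical placeholder)
    x:       input batch, e.g. images scaled to [0, 1], shape (N, C, H, W)
    y:       ground-truth labels, shape (N,)
    epsilon: L-infinity perturbation budget (0.03 is illustrative)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp back
    # to the valid input range so the perturbed input stays well-formed.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Adversarial training, one of the defenses mentioned above, typically runs an attack like this inside the training loop: each batch is perturbed on the fly and the model is optimized on the perturbed inputs rather than the clean ones.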
Papers
MMAD-Purify: A Precision-Optimized Framework for Efficient and Scalable Multi-Modal Attacks
Xinxin Liu, Zhongliang Guo, Siyuan Huang, Chun Pong Lau
Golyadkin's Torment: Doppelgängers and Adversarial Vulnerability
George I. Kamberov
Boosting Imperceptibility of Stable Diffusion-based Adversarial Examples Generation with Momentum
Nashrah Haque, Xiang Li, Zhehui Chen, Yanzhao Wu, Lei Yu, Arun Iyengar, Wenqi Wei
New Paradigm of Adversarial Training: Breaking Inherent Trade-Off between Accuracy and Robustness via Dummy Classes
Yanyun Wang, Li Liu, Zi Liang, Qingqing Ye, Haibo Hu
DAT: Improving Adversarial Robustness via Generative Amplitude Mix-up in Frequency Domain
Fengpeng Li, Kemou Li, Haiwei Wu, Jinyu Tian, Jiantao Zhou
LOTOS: Layer-wise Orthogonalization for Training Robust Ensembles
Ali Ebrahimpour-Boroojeny, Hari Sundaram, Varun Chandrasekaran
AnyAttack: Towards Large-scale Self-supervised Generation of Targeted Adversarial Examples for Vision-Language Models
Jiaming Zhang, Junhong Ye, Xingjun Ma, Yige Li, Yunfan Yang, Jitao Sang, Dit-Yan Yeung