Adversarial Example
Adversarial examples are subtly altered inputs designed to fool machine learning models, especially deep neural networks (DNNs), into making incorrect predictions. Current research focuses on improving model robustness against these attacks, exploring techniques such as ensemble methods, multi-objective representation learning, and adversarial training, often applied to architectures like ResNets and Vision Transformers. Understanding and mitigating the threat of adversarial examples is crucial for ensuring the reliability and security of AI systems across diverse applications, from image classification and natural language processing to malware detection and autonomous driving. Developing robust defenses and reliable attack-detection methods remains an active area of research.
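To make the idea of a "subtly altered input" concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest ways to craft such an example. It assumes a PyTorch classifier; the function name `fgsm_attack`, the perturbation budget `epsilon`, and the placeholder model and inputs are illustrative choices, not taken from any of the papers listed here.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial example with the fast gradient sign method.

    Perturbs input `x` by `epsilon` in the direction that maximally
    increases the classification loss, then clamps to the valid range.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the input gradient to increase the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage with a dummy classifier (any image model would do the same job):
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)   # a placeholder "image" in [0, 1]
y = torch.tensor([3])          # its (assumed) true label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # perturbation is bounded by epsilon
```

Adversarial training, mentioned above, essentially folds examples like `x_adv` back into the training loop so the model learns to classify them correctly.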
Papers
OMG-ATTACK: Self-Supervised On-Manifold Generation of Transferable Evasion Attacks
Ofir Bar Tal, Adi Haviv, Amit H. Bermano
Adversarial Machine Learning for Social Good: Reframing the Adversary as an Ally
Shawqi Al-Maliki, Adnan Qayyum, Hassan Ali, Mohamed Abdallah, Junaid Qadir, Dinh Thai Hoang, Dusit Niyato, Ala Al-Fuqaha
Enhancing Robust Representation in Adversarial Training: Alignment and Exclusion Criteria
Nuoyan Zhou, Nannan Wang, Decheng Liu, Dawei Zhou, Xinbo Gao
An Integrated Algorithm for Robust and Imperceptible Audio Adversarial Examples
Armin Ettenhofer, Jan-Philipp Schulze, Karla Pizzi
A Survey of Robustness and Safety of 2D and 3D Deep Learning Models Against Adversarial Attacks
Yanjie Li, Bin Xie, Songtao Guo, Yuanyuan Yang, Bin Xiao
Understanding the Robustness of Randomized Feature Defense Against Query-Based Adversarial Attacks
Quang H. Nguyen, Yingjie Lao, Tung Pham, Kok-Seng Wong, Khoa D. Doan