Adversarial Example
Adversarial examples are subtly altered inputs designed to fool machine learning models, primarily deep neural networks (DNNs), into making incorrect predictions. Current research focuses on improving model robustness against these attacks, exploring techniques such as ensemble methods, multi-objective representation learning, and adversarial training, often applied to architectures such as ResNets and Vision Transformers. Understanding and mitigating the threat of adversarial examples is crucial for ensuring the reliability and security of AI systems across diverse applications, from image classification and natural language processing to malware detection and autonomous driving. The development of robust defenses and effective attack detection methods remains an active area of investigation.
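To make the core idea concrete, the sketch below uses the fast gradient sign method (FGSM), one standard way to craft such a perturbed input within an L-infinity budget. It is a minimal, hypothetical illustration only: the ResNet-18 model, the epsilon value, and the random input are placeholder assumptions and are not drawn from any of the papers listed here.

```python
# Minimal FGSM sketch (illustrative only): nudge an input in the direction
# that increases the classifier's loss, keeping the change within epsilon.
# The model, epsilon, and data below are placeholders, not from the papers listed.
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_attack(model, x, label, epsilon):
    """Return an adversarial version of x under an L-infinity budget of epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step along the sign of the gradient, then clamp back to a valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

model = models.resnet18(weights=None).eval()   # placeholder classifier
x = torch.rand(1, 3, 224, 224)                 # placeholder "image"
label = torch.tensor([0])                      # placeholder true label
x_adv = fgsm_attack(model, x, label, epsilon=8 / 255)
print((x_adv - x).abs().max())                 # perturbation stays within epsilon
```

Defenses such as adversarial training reuse this same step, generating perturbed inputs like x_adv during training and optimizing the model on them so that small worst-case perturbations no longer flip its predictions.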
Papers
On Feasibility of Intent Obfuscating Attacks
Zhaobin Li, Patrick Shafto
Towards Robust Vision Transformer via Masked Adaptive Ensemble
Fudong Lin, Jiadong Lou, Xu Yuan, Nian-Feng Tzeng
Improving Fast Adversarial Training Paradigm: An Example Taxonomy Perspective
Jie Gui, Chengze Jiang, Minjing Dong, Kun Tong, Xinli Shi, Yuan Yan Tang, Dacheng Tao
Preventing Catastrophic Overfitting in Fast Adversarial Training: A Bi-level Optimization Perspective
Zhaoxin Wang, Handing Wang, Cong Tian, Yaochu Jin
Any Target Can be Offense: Adversarial Example Generation via Generalized Latent Infection
Youheng Sun, Shengming Yuan, Xuanhan Wang, Lianli Gao, Jingkuan Song
A Hybrid Training-time and Run-time Defense Against Adversarial Attacks in Modulation Classification
Lu Zhang, Sangarapillai Lambotharan, Gan Zheng, Guisheng Liao, Ambra Demontis, Fabio Roli
Countermeasures Against Adversarial Examples in Radio Signal Classification
Lu Zhang, Sangarapillai Lambotharan, Gan Zheng, Basil AsSadhan, Fabio Roli
Improving the Transferability of Adversarial Examples by Feature Augmentation
Donghua Wang, Wen Yao, Tingsong Jiang, Xiaohu Zheng, Junqi Wu, Xiaoqian Chen