Adversarial Example
Adversarial examples are subtly altered inputs designed to fool machine learning models, primarily deep neural networks (DNNs), into making incorrect predictions. Current research focuses on improving model robustness against these attacks through techniques such as ensemble methods, multi-objective representation learning, and adversarial training, often applied to architectures like ResNets and Vision Transformers. Understanding and mitigating the threat of adversarial examples is crucial for ensuring the reliability and security of AI systems across diverse applications, from image classification and natural language processing to malware detection and autonomous driving. Developing robust defenses and effective attack detection methods remains an active area of research.
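To make the threat concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest ways to craft an adversarial example. It is illustrative only: the PyTorch model, input shapes, and epsilon budget are assumptions chosen for the demo, not details from any of the papers listed below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Craft adversarial examples with the Fast Gradient Sign Method:
    take one step of size epsilon in the direction of the sign of the
    loss gradient with respect to the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # For small epsilon the perturbation is visually negligible, yet it
    # is often enough to flip the model's prediction.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in [0, 1]

# Toy demonstration with a hypothetical, untrained classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)            # batch of inputs in [0, 1]
y = torch.randint(0, 10, (4,))          # ground-truth labels
x_adv = fgsm_attack(model, x, y, epsilon=0.1)
print((x_adv - x).abs().max().item())   # perturbation bounded by epsilon
```

Adversarial training, mentioned above as a defense, typically generates perturbed inputs like these on the fly during training and includes them in the loss, so the model learns to classify them correctly.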
Papers
Deep generative models as an adversarial attack strategy for tabular machine learning
Salijona Dyrmishi, Mihaela Cătălina Stoian, Eleonora Giunchiglia, Maxime Cordy
TEAM: Temporal Adversarial Examples Attack Model against Network Intrusion Detection System Applied to RNN
Ziyi Liu, Dengpan Ye, Long Tang, Yunming Zhang, Jiacheng Deng
Enhancing 3D Robotic Vision Robustness by Minimizing Adversarial Mutual Information through a Curriculum Training Approach
Nastaran Darabi, Dinithi Jayasuriya, Devashri Naik, Theja Tulabandhula, Amit Ranjan Trivedi
Input Space Mode Connectivity in Deep Neural Networks
Jakub Vrabel, Ori Shem-Ur, Yaron Oz, David Krueger
Adversarial Attacks on Data Attribution
Xinhe Wang, Pingbang Hu, Junwei Deng, Jiaqi W. Ma
Seeing Through the Mask: Rethinking Adversarial Examples for CAPTCHAs
Yahya Jabary, Andreas Plesner, Turlan Kuzhagaliyev, Roger Wattenhofer
Accurate Forgetting for All-in-One Image Restoration Model
Xin Su, Zhuoran Zheng
Comprehensive Botnet Detection by Mitigating Adversarial Attacks, Navigating the Subtleties of Perturbation Distances and Fortifying Predictions with Conformal Layers
Rahul Yumlembam, Biju Issac, Seibu Mary Jacob, Longzhi Yang