Adversarial Example
Adversarial examples are subtly perturbed inputs crafted to fool machine learning models, primarily deep neural networks (DNNs), into making incorrect predictions. Current research focuses on improving model robustness against such attacks, exploring techniques like ensemble methods, multi-objective representation learning, and adversarial training, often applied to architectures such as ResNets and Vision Transformers. Understanding and mitigating adversarial examples is crucial for the reliability and security of AI systems across diverse applications, from image classification and natural language processing to malware detection and autonomous driving. Developing robust defenses and effective attack-detection methods remains an active area of investigation.
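As a concrete illustration of how such perturbations are generated, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest attack formulations. The `fgsm_attack` helper, the use of PyTorch, and the `epsilon` budget are illustrative assumptions, not drawn from any of the papers listed here; adversarial training, mentioned above, essentially folds inputs perturbed this way back into the training loss.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: perturb the input in the direction of the sign of the
    loss gradient, with the perturbation size bounded by epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss of the clean prediction
    loss.backward()                       # gradient of the loss w.r.t. the input
    x_adv = x + epsilon * x.grad.sign()   # signed gradient step
    return x_adv.clamp(0.0, 1.0).detach() # keep pixels in the valid [0, 1] range
```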
Papers
Multi-step domain adaptation by adversarial attack to $\mathcal{H} \Delta \mathcal{H}$-divergence
Arip Asadulaev, Alexander Panfilov, Andrey Filchenkov
Easy Batch Normalization
Arip Asadulaev, Alexander Panfilov, Andrey Filchenkov
Prior-Guided Adversarial Initialization for Fast Adversarial Training
Xiaojun Jia, Yong Zhang, Xingxing Wei, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao
Practical Attacks on Machine Learning: A Case Study on Adversarial Windows Malware
Luca Demetrio, Battista Biggio, Fabio Roli
Exploring Adversarial Examples and Adversarial Robustness of Convolutional Neural Networks by Mutual Information
Jiebao Zhang, Wenhua Qian, Rencan Nie, Jinde Cao, Dan Xu
Adversarial Robustness Assessment of NeuroEvolution Approaches
Inês Valentim, Nuno Lourenço, Nuno Antunes
Frequency Domain Model Augmentation for Adversarial Attack
Yuyang Long, Qilong Zhang, Boheng Zeng, Lianli Gao, Xianglong Liu, Jian Zhang, Jingkuan Song
Dynamic Time Warping based Adversarial Framework for Time-Series Domain
Taha Belkhouja, Yan Yan, Janardhan Rao Doppa
Adversarial Framework with Certified Robustness for Time-Series Domain via Statistical Features
Taha Belkhouja, Janardhan Rao Doppa
Jacobian Norm with Selective Input Gradient Regularization for Improved and Interpretable Adversarial Defense
Deyin Liu, Lin Wu, Haifeng Zhao, Farid Boussaid, Mohammed Bennamoun, Xianghua Xie