Adversarial Example
Adversarial examples are subtly altered inputs designed to fool machine learning models, primarily deep neural networks (DNNs), into making incorrect predictions. Current research focuses on improving model robustness against these attacks, exploring techniques like ensemble methods, multi-objective representation learning, and adversarial training, often applied to architectures such as ResNets and Vision Transformers. Understanding and mitigating the threat of adversarial examples is crucial for ensuring the reliability and security of AI systems across diverse applications, from image classification and natural language processing to malware detection and autonomous driving. The development of robust defenses and effective attack detection methods remains a significant area of ongoing investigation.
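To make the idea concrete, below is a minimal, illustrative sketch of the canonical Fast Gradient Sign Method (FGSM) for crafting an adversarial example. It assumes a PyTorch image classifier with inputs scaled to [0, 1]; the `model`, `x`, and `y` names are placeholders and the snippet is not drawn from any of the papers listed here.

```python
# Minimal FGSM sketch (assumes a PyTorch classifier and inputs in [0, 1]).
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Return a copy of x perturbed within an L-infinity ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip back to valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Adversarial training, one of the defenses mentioned above, typically reuses such an attack inside the training loop, fitting the model on perturbed inputs rather than (or alongside) clean ones.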
Papers
What Learned Representations and Influence Functions Can Tell Us About Adversarial Examples
Shakila Mahjabin Tonni, Mark Dras
Adversarial Attacks Against Uncertainty Quantification
Emanuele Ledda, Daniele Angioni, Giorgio Piras, Giorgio Fumera, Battista Biggio, Fabio Roli
Transferable Adversarial Attack on Image Tampering Localization
Yuqi Wang, Gang Cao, Zijie Lou, Haochen Zhu
Hardening RGB-D Object Recognition Systems against Adversarial Patch Attacks
Yang Zheng, Luca Demetrio, Antonio Emanuele Cinà, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Battista Biggio, Fabio Roli
Mitigating Adversarial Attacks in Federated Learning with Trusted Execution Environments
Simon Queyrut, Valerio Schiavoni, Pascal Felber