Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, exploring various attack strategies across different model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
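As a concrete illustration of the bounded input perturbations these papers study, below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest white-box attacks. It assumes a differentiable PyTorch image classifier; the model handle, epsilon budget, and [0, 1] pixel range are illustrative assumptions, not details taken from any paper listed here.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Perturb input x so the classifier's loss on the true label y increases.
    # `model`, `epsilon`, and the [0, 1] pixel range are assumptions.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient, bounded by epsilon.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Clamp back to the valid input range so the change stays subtle.
    return x_adv.clamp(0.0, 1.0).detach()

Larger epsilon values make the attack stronger but also more perceptible; much of the robustness work collected below amounts to keeping model predictions stable under exactly this kind of bounded perturbation.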
Papers
Comparing the Robustness of Modern No-Reference Image- and Video-Quality Metrics to Adversarial Attacks
Anastasia Antsiferova, Khaled Abud, Aleksandr Gushchin, Ekaterina Shumitskaya, Sergey Lavrushkin, Dmitriy Vatolin
A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks
Yang Wang, Bo Dong, Ke Xu, Haiyin Piao, Yufei Ding, Baocai Yin, Xin Yang
Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach
Kai Zhao, Qiyu Kang, Yang Song, Rui She, Sijie Wang, Wee Peng Tay
Exploring adversarial attacks in federated learning for medical imaging
Erfan Darzi, Florian Dubost, N. M. Sijtsema, P. M. A. van Ooijen
PAC-Bayesian Spectrally-Normalized Bounds for Adversarially Robust Generalization
Jiancong Xiao, Ruoyu Sun, Zhi-Quan Luo
AdvSV: An Over-the-Air Adversarial Attack Dataset for Speaker Verification
Li Wang, Jiaqi Li, Yuhao Luo, Jiahao Zheng, Lei Wang, Hao Li, Ke Xu, Chengfang Fang, Jie Shi, Zhizheng Wu
An Initial Investigation of Neural Replay Simulator for Over-the-Air Adversarial Perturbations to Automatic Speaker Verification
Jiaqi Li, Li Wang, Liumeng Xue, Lei Wang, Zhizheng Wu
GReAT: A Graph Regularized Adversarial Training Method
Samet Bayram, Kenneth Barner
Targeted Attack Improves Protection against Unauthorized Diffusion Customization
Boyang Zheng, Chumeng Liang, Xiaoyu Wu
VLATTACK: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models
Ziyi Yin, Muchao Ye, Tianrong Zhang, Tianyu Du, Jinguo Zhu, Han Liu, Jinghui Chen, Ting Wang, Fenglong Ma
Assessing Robustness via Score-Based Adversarial Image Generation
Marcel Kollovieh, Lukas Gosch, Yan Scholten, Marten Lienen, Stephan Günnemann
Kick Bad Guys Out! Conditionally Activated Anomaly Detection in Federated Learning with Zero-Knowledge Proof Verification
Shanshan Han, Wenxuan Wu, Baturalp Buyukates, Weizhao Jin, Qifan Zhang, Yuhang Yao, Salman Avestimehr, Chaoyang He
Improving classifier decision boundaries using nearest neighbors
Johannes Schneider
Targeted Adversarial Attacks on Generalizable Neural Radiance Fields
Andras Horvath, Csaba M. Jozsa
Untargeted White-box Adversarial Attack with Heuristic Defence Methods in Real-time Deep Learning based Network Intrusion Detection System
Khushnaseeb Roshan, Aasim Zafar, Sheikh Burhan Ul Haque
Raze to the Ground: Query-Efficient Adversarial HTML Attacks on Machine-Learning Phishing Webpage Detectors
Biagio Montaruli, Luca Demetrio, Maura Pintor, Luca Compagna, Davide Balzarotti, Battista Biggio
Optimizing Key-Selection for Face-based One-Time Biometrics via Morphing
Daile Osorio-Roig, Mahdi Ghafourian, Christian Rathgeb, Ruben Vera-Rodriguez, Christoph Busch, Julian Fierrez