Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, and on characterizing attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
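To make the "subtle alteration" idea concrete, below is a minimal sketch of the classic Fast Gradient Sign Method (FGSM), a standard baseline attack that perturbs an input in the direction of the loss gradient's sign. This is illustrative only and not taken from any of the papers listed below; the toy model, the epsilon budget, and the random data are assumptions for demonstration.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial example with the Fast Gradient Sign Method.

    Moves the input in the direction of the sign of the loss gradient,
    keeping the perturbation within `epsilon` in the L-infinity norm.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the
    # valid pixel range so the result is still a legal input.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage: a hypothetical linear classifier on random "image" data.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # input scaled to [0, 1]
y = torch.tensor([3])          # assumed true label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max()) # perturbation stays within epsilon
```

Stronger iterative variants (e.g., PGD) repeat this step with projection back onto the epsilon ball, which is the usual starting point for the robustness evaluations the papers below build on.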
Papers
Towards Deep Learning Models Resistant to Transfer-based Adversarial Attacks via Data-centric Robust Learning
Yulong Yang, Chenhao Lin, Xiang Ji, Qiwei Tian, Qian Li, Hongshan Yang, Zhibo Wang, Chao Shen
AFLOW: Developing Adversarial Examples under Extremely Noise-limited Settings
Renyang Liu, Jinhong Zhang, Haoran Li, Jin Zhang, Yuanyu Wang, Wei Zhou
SCME: A Self-Contrastive Method for Data-free and Query-Limited Model Extraction Attack
Renyang Liu, Jinhong Zhang, Kwok-Yan Lam, Jun Zhao, Wei Zhou
Comparing the Robustness of Modern No-Reference Image- and Video-Quality Metrics to Adversarial Attacks
Anastasia Antsiferova, Khaled Abud, Aleksandr Gushchin, Ekaterina Shumitskaya, Sergey Lavrushkin, Dmitriy Vatolin
A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks
Yang Wang, Bo Dong, Ke Xu, Haiyin Piao, Yufei Ding, Baocai Yin, Xin Yang
Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach
Kai Zhao, Qiyu Kang, Yang Song, Rui She, Sijie Wang, Wee Peng Tay
Exploring adversarial attacks in federated learning for medical imaging
Erfan Darzi, Florian Dubost, N. M. Sijtsema, P. M. A. van Ooijen
PAC-Bayesian Spectrally-Normalized Bounds for Adversarially Robust Generalization
Jiancong Xiao, Ruoyu Sun, Zhi-Quan Luo
AdvSV: An Over-the-Air Adversarial Attack Dataset for Speaker Verification
Li Wang, Jiaqi Li, Yuhao Luo, Jiahao Zheng, Lei Wang, Hao Li, Ke Xu, Chengfang Fang, Jie Shi, Zhizheng Wu
An Initial Investigation of Neural Replay Simulator for Over-the-Air Adversarial Perturbations to Automatic Speaker Verification
Jiaqi Li, Li Wang, Liumeng Xue, Lei Wang, Zhizheng Wu
GReAT: A Graph Regularized Adversarial Training Method
Samet Bayram, Kenneth Barner
Targeted Attack Improves Protection against Unauthorized Diffusion Customization
Boyang Zheng, Chumeng Liang, Xiaoyu Wu
VLATTACK: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models
Ziyi Yin, Muchao Ye, Tianrong Zhang, Tianyu Du, Jinguo Zhu, Han Liu, Jinghui Chen, Ting Wang, Fenglong Ma
Assessing Robustness via Score-Based Adversarial Image Generation
Marcel Kollovieh, Lukas Gosch, Yan Scholten, Marten Lienen, Stephan Günnemann
Kick Bad Guys Out! Conditionally Activated Anomaly Detection in Federated Learning with Zero-Knowledge Proof Verification
Shanshan Han, Wenxuan Wu, Baturalp Buyukates, Weizhao Jin, Qifan Zhang, Yuhang Yao, Salman Avestimehr, Chaoyang He