Adversarial Attack
Adversarial attacks aim to deceive machine learning models by applying small, often imperceptible perturbations to input data, causing misclassifications or other erroneous outputs. Current research focuses on building more robust models and better detection methods, and on exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for the reliability and security of AI systems in applications ranging from autonomous vehicles to medical diagnosis and cybersecurity.
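To make the core idea concrete, below is a minimal sketch of the classic fast gradient sign method (FGSM), one standard way such perturbations are computed. The toy PyTorch model, the epsilon value, and the fgsm_attack helper are illustrative assumptions for this sketch, not taken from any of the papers listed here.

```python
# Minimal FGSM sketch: a small, bounded perturbation of the input that
# increases the classifier's loss, often flipping its prediction.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarial copy of x within an L-infinity ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximally increases the loss,
    # then clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage: a small untrained CNN stands in for the victim model (an assumption).
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)    # a "clean" image with values in [0, 1]
y = model(x).argmax(dim=1)      # treat the current prediction as the label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # per-pixel change bounded by epsilon
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))
```

The single gradient-sign step keeps every pixel's change within epsilon, which is what makes the perturbation hard to notice while still being adversarially effective; stronger iterative attacks refine this same idea.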
Papers
Enhance DNN Adversarial Robustness and Efficiency via Injecting Noise to Non-Essential Neurons
Zhenyu Liu, Garrett Gagnon, Swagath Venkataramani, Liu Liu
PAC-Bayesian Adversarially Robust Generalization Bounds for Graph Neural Network
Tan Sun, Junhong Lin
AttackNet: Enhancing Biometric Security via Tailored Convolutional Neural Network Architectures for Liveness Detection
Oleksandr Kuznetsov, Dmytro Zakharov, Emanuele Frontoni, Andrea Maranesi
FoolSDEdit: Deceptively Steering Your Edits Towards Targeted Attribute-aware Distribution
Qi Zhou, Dongxia Wang, Tianlin Li, Zhihong Xu, Yang Liu, Kui Ren, Wenhai Wang, Qing Guo
Partially Recentralization Softmax Loss for Vision-Language Models Robustness
Hao Wang, Jinzhe Jiang, Xin Zhang, Chen Li
PROSAC: Provably Safe Certification for Machine Learning Models under Adversarial Attacks
Chen Feng, Ziquan Liu, Zhuo Zhi, Ilija Bogunovic, Carsten Gerner-Beuerle, Miguel Rodrigues
DeSparsify: Adversarial Attack Against Token Sparsification Mechanisms in Vision Transformers
Oryan Yehezkel, Alon Zolfi, Amit Baras, Yuval Elovici, Asaf Shabtai
On the Multi-modal Vulnerability of Diffusion Models
Dingcheng Yang, Yang Bai, Xiaojun Jia, Yang Liu, Xiaochun Cao, Wenjian Yu
SignSGD with Federated Defense: Harnessing Adversarial Attacks through Gradient Sign Decoding
Chanho Park, Namyoon Lee
STAA-Net: A Sparse and Transferable Adversarial Attack for Speech Emotion Recognition
Yi Chang, Zhao Ren, Zixing Zhang, Xin Jing, Kun Qian, Xi Shao, Bin Hu, Tanja Schultz, Björn W. Schuller
Tropical Decision Boundaries for Neural Networks Are Robust Against Adversarial Attacks
Kurt Pasque, Christopher Teska, Ruriko Yoshida, Keiji Miura, Jefferson Huang
Benchmarking Transferable Adversarial Attacks
Zhibo Jin, Jiayu Zhang, Zhiyu Zhu, Huaming Chen
Invariance-powered Trustworthy Defense via Remove Then Restore
Xiaowei Fu, Yuhang Zhou, Lina Ma, Lei Zhang