Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly perturbing input data so that the model produces misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, and on exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for the reliability and security of AI systems in applications ranging from autonomous vehicles to medical diagnosis and cybersecurity.
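Since the summary describes attacks as small input perturbations that flip a model's prediction, a minimal sketch of one canonical gradient-based method, the Fast Gradient Sign Method (FGSM), may help make this concrete. The PyTorch classifier, the inputs, and the budget `epsilon` below are illustrative assumptions, not taken from any of the listed papers.

```python
# Minimal FGSM sketch (assumes a differentiable PyTorch classifier `model`,
# an image batch `x` scaled to [0, 1], and integer labels `y`).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return an adversarial copy of x within an L-infinity ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the valid image range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Stronger attacks studied in the literature (e.g., iterative or adaptive variants) refine this basic idea, while defenses aim to keep predictions stable under such perturbations.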
Papers
Robust Semantic Segmentation: Strong Adversarial Attacks and Fast Training of Robust Models
Francesco Croce, Naman D Singh, Matthias Hein
Conditional Generators for Limit Order Book Environments: Explainability, Challenges, and Robustness
Andrea Coletta, Joseph Jerome, Rahul Savani, Svitlana Vyetrenko
Towards quantum enhanced adversarial robustness in machine learning
Maxwell T. West, Shu-Lok Tsang, Jia S. Low, Charles D. Hill, Christopher Leckie, Lloyd C. L. Hollenberg, Sarah M. Erfani, Muhammad Usman
Adversarial Attacks Neutralization via Data Set Randomization
Mouna Rabhi, Roberto Di Pietro
Sample Attackability in Natural Language Adversarial Attacks
Vyas Raina, Mark Gales
Physics-constrained Attack against Convolution-based Human Motion Prediction
Chengxu Duan, Zhicheng Zhang, Xiaoli Liu, Yonghao Dang, Jianqin Yin
A Unified Framework of Graph Information Bottleneck for Robustness and Membership Privacy
Enyan Dai, Limeng Cui, Zhengyang Wang, Xianfeng Tang, Yinghan Wang, Monica Cheng, Bing Yin, Suhang Wang
A Relaxed Optimization Approach for Adversarial Attacks against Neural Machine Translation Models
Sahar Sadrizadeh, Clément Barbier, Ljiljana Dolamic, Pascal Frossard
X-Detect: Explainable Adversarial Patch Detection for Object Detectors in Retail
Omer Hofman, Amit Giloni, Yarin Hayun, Ikuya Morikawa, Toshiya Shimizu, Yuval Elovici, Asaf Shabtai
Finite Gaussian Neurons: Defending against adversarial attacks by making neural networks say "I don't know"
Felix Grezes
Area is all you need: repeatable elements make stronger adversarial attacks
Dillon Niederhut
Malafide: a novel adversarial convolutive noise attack against deepfake and spoofing detection systems
Michele Panariello, Wanying Ge, Hemlata Tak, Massimiliano Todisco, Nicholas Evans
Revisiting and Advancing Adversarial Training Through A Simple Baseline
Hong Liu
Adversarial Attacks on the Interpretation of Neuron Activation Maximization
Geraldin Nanfack, Alexander Fulleringer, Jonathan Marty, Michael Eickenberg, Eugene Belilovsky
How robust accuracy suffers from certified training with convex relaxations
Piersilvio De Bartolomeis, Jacob Clarysse, Amartya Sanyal, Fanny Yang