Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, and on exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
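To make the idea of "subtly altering input data" concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one classic gradient-based attack: it nudges each input pixel by a small step epsilon in the direction that increases the model's loss. The model, data, and epsilon value here are toy placeholders for illustration, not drawn from any of the papers listed.

```python
# Minimal FGSM sketch (assumes PyTorch; model and data are illustrative).
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Perturb input x by epsilon in the direction that maximizes the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient, then clamp to keep the
    # perturbed image in the valid pixel range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage with a toy classifier and a random "image" batch:
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)            # batch of 4 grayscale images
y = torch.randint(0, 10, (4,))          # arbitrary true labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())          # perturbation is bounded by epsilon
```

The key design point is that the perturbation is bounded in the L-infinity norm (no pixel moves more than epsilon), which is what keeps the change imperceptible to humans while still flipping the model's prediction; many of the attack and defense papers below build on this threat model.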
Papers
Understanding Robustness of Visual State Space Models for Image Classification
Chengbin Du, Yanxi Li, Chang Xu
Improving Adversarial Transferability of Visual-Language Pre-training Models through Collaborative Multimodal Interaction
Jiyuan Fu, Zhaoyu Chen, Kaixun Jiang, Haijing Guo, Jiafeng Wang, Shuyong Gao, Wenqiang Zhang
Benchmarking Zero-Shot Robustness of Multimodal Foundation Models: A Pilot Study
Chenguang Wang, Ruoxi Jia, Xin Liu, Dawn Song
Introducing Adaptive Continuous Adversarial Training (ACAT) to Enhance ML Robustness
Mohamed elShehaby, Aditya Kotha, Ashraf Matrawy
Revisiting Adversarial Training under Long-Tailed Distributions
Xinli Yue, Ningping Mou, Qian Wang, Lingchen Zhao
Robust Subgraph Learning by Monitoring Early Training Representations
Sepideh Neshatfar, Salimeh Yasaei Sekeh
Towards White Box Deep Learning
Maciej Satkiewicz
What Sketch Explainability Really Means for Downstream Tasks
Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, Ayan Kumar Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song
Adversarial Fine-tuning of Compressed Neural Networks for Joint Improvement of Robustness and Efficiency
Hallgrimur Thorsteinsson, Valdemar J Henriksen, Tong Chen, Raghavendra Selvan
Adversarial Training with OCR Modality Perturbation for Scene-Text Visual Question Answering
Zhixuan Shen, Haonan Luo, Sijia Li, Tianrui Li
Soften to Defend: Towards Adversarial Robustness via Self-Guided Label Refinement
Daiwei Yu, Zhuorong Li, Lina Wei, Canghong Jin, Yun Zhang, Sixian Chan
The Impact of Quantization on the Robustness of Transformer-based Text Classifiers
Seyed Parsa Neshaei, Yasaman Boreshban, Gholamreza Ghassem-Sani, Seyed Abolghasem Mirroshandel
Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds
Tianrui Lou, Xiaojun Jia, Jindong Gu, Li Liu, Siyuan Liang, Bangyan He, Xiaochun Cao
Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume
Ping Guo, Cheng Gong, Xi Lin, Zhiyuan Yang, Qingfu Zhang
Defending Against Unforeseen Failure Modes with Latent Adversarial Training
Stephen Casper, Lennart Schulze, Oam Patel, Dylan Hadfield-Menell