Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, and on exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
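To make the attack setting concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest white-box attacks of the kind the papers in this list attack or defend against. It is illustrative only and not drawn from any specific paper here; the `model`, the `epsilon` budget, and the `[0, 1]` input range are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: nudge each input feature by +/- epsilon in the
    direction that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Signed-gradient step, then clamp back to the valid input range
    # [0, 1] (an assumption; adjust for your data's normalization).
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Even with a single gradient step, a perturbation of this size is often imperceptible to humans yet flips the model's prediction, which is the failure mode the robustness and detection papers below address.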
Papers
Raze to the Ground: Query-Efficient Adversarial HTML Attacks on Machine-Learning Phishing Webpage Detectors
Biagio Montaruli, Luca Demetrio, Maura Pintor, Luca Compagna, Davide Balzarotti, Battista Biggio
Optimizing Key-Selection for Face-based One-Time Biometrics via Morphing
Daile Osorio-Roig, Mahdi Ghafourian, Christian Rathgeb, Ruben Vera-Rodriguez, Christoph Busch, Julian Fierrez
LoFT: Local Proxy Fine-tuning For Improving Transferability Of Adversarial Attacks Against Large Language Models
Muhammad Ahmed Shah, Roshan Sharma, Hira Dhamyal, Raphael Olivier, Ankit Shah, Joseph Konan, Dareen Alharthi, Hazim T Bukhari, Massa Baali, Soham Deshmukh, Michael Kuhlmann, Bhiksha Raj, Rita Singh
Adversarial Client Detection via Non-parametric Subspace Monitoring in the Internet of Federated Things
Xianjian Xie, Xiaochen Xian, Dan Li, Andi Wang
Fooling the Textual Fooler via Randomizing Latent Representations
Duy C. Hoang, Quang H. Nguyen, Saurav Manchanda, MinLong Peng, Kok-Seng Wong, Khoa D. Doan
Counterfactual Image Generation for adversarially robust and interpretable Classifiers
Rafael Bischof, Florian Scheidegger, Michael A. Kraus, A. Cristiano I. Malossi
A Survey of Robustness and Safety of 2D and 3D Deep Learning Models Against Adversarial Attacks
Yanjie Li, Bin Xie, Songtao Guo, Yuanyuan Yang, Bin Xiao
Understanding the Robustness of Randomized Feature Defense Against Query-Based Adversarial Attacks
Quang H. Nguyen, Yingjie Lao, Tung Pham, Kok-Seng Wong, Khoa D. Doan
Adversarial Machine Learning in Latent Representations of Neural Networks
Milin Zhang, Mohammad Abdi, Francesco Restuccia
Intrinsic Biologically Plausible Adversarial Robustness
Matilde Tristany Farinha, Thomas Ortner, Giorgia Dellaferrera, Benjamin Grewe, Angeliki Pantazi
On Continuity of Robust and Accurate Classifiers
Ramin Barati, Reza Safabakhsh, Mohammad Rahmati
Investigating Human-Identifiable Features Hidden in Adversarial Perturbations
Dennis Y. Menn, Tzu-hsun Feng, Sriram Vishwanath, Hung-yi Lee
Robust Offline Reinforcement Learning -- Certify the Confidence Interval
Jiarui Yao, Simon Shaolei Du
Parameter-Saving Adversarial Training: Reinforcing Multi-Perturbation Robustness via Hypernetworks
Huihui Gong, Minjing Dong, Siqi Ma, Seyit Camtepe, Surya Nepal, Chang Xu
Adversarial Attacks on Video Object Segmentation with Hard Region Discovery
Ping Li, Yu Zhang, Li Yuan, Jian Zhao, Xianghua Xu, Xiaoqin Zhang
On the Effectiveness of Adversarial Samples against Ensemble Learning-based Windows PE Malware Detectors
Trong-Nghia To, Danh Le Kim, Do Thi Thu Hien, Nghi Hoang Khoa, Hien Do Hoang, Phan The Duy, Van-Hau Pham