Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, and on exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for the reliability and security of AI systems in applications ranging from autonomous vehicles to medical diagnosis and cybersecurity.
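To make the core idea concrete, the sketch below implements the Fast Gradient Sign Method (FGSM), one of the simplest white-box attacks: it nudges an input a small step in the direction that increases the model's loss, which is often enough to flip the prediction while leaving the input visually almost unchanged. This is a minimal illustration, not a method from any paper listed here; the `model`, `image`, and `label` arguments are placeholders for any differentiable PyTorch classifier and its input batch.

```python
# Minimal FGSM sketch (illustrative; model/image/label are placeholders).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Perturb `image` by `epsilon` in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # logits vs. true label
    loss.backward()                              # gradient w.r.t. the input
    # Step along the sign of the input gradient, then clamp to valid pixels.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```

Larger values of `epsilon` make the attack more effective but also more perceptible; much of the literature below studies either stronger variants of this gradient-based idea or defenses against it.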
Papers
Adversarial Robustness in Unsupervised Machine Learning: A Systematic Review
Mathias Lundteigen Mohus, Jinyue Li
Byzantine-Robust Clustered Federated Learning
Zhixu Tao, Kun Yang, Sanjeev R. Kulkarni
Adversarial-Aware Deep Learning System based on a Secondary Classical Machine Learning Verification Approach
Mohammed Alkhowaiter, Hisham Kholidy, Mnassar Alyami, Abdulmajeed Alghamdi, Cliff Zou
Deception by Omission: Using Adversarial Missingness to Poison Causal Structure Learning
Deniz Koyuncu, Alex Gittens, Bülent Yener, Moti Yung
Graph-based methods coupled with specific distributional distances for adversarial attack detection
Dwight Nwaigwe, Lucrezia Carboni, Martial Mermillod, Sophie Achard, Michel Dojat
Red Teaming Language Model Detectors with Language Models
Zhouxing Shi, Yihan Wang, Fan Yin, Xiangning Chen, Kai-Wei Chang, Cho-Jui Hsieh
Exploring the Vulnerabilities of Machine Learning and Quantum Machine Learning to Adversarial Attacks using a Malware Dataset: A Comparative Analysis
Mst Shapna Akter, Hossain Shahriar, Iysa Iqbal, MD Hossain, M. A. Karim, Victor Clincy, Razvan Voicu
UMD: Unsupervised Model Detection for X2X Backdoor Attacks
Zhen Xiang, Zidi Xiong, Bo Li
Exploiting Explainability to Design Adversarial Attacks and Evaluate Attack Resilience in Hate-Speech Detection Models
Pranath Reddy Kumbam, Sohaib Uddin Syed, Prashanth Thamminedi, Suhas Harish, Ian Perera, Bonnie J. Dorr
From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework
Yangyi Chen, Hongcheng Gao, Ganqu Cui, Lifan Yuan, Dehan Kong, Hanlu Wu, Ning Shi, Bo Yuan, Longtao Huang, Hui Xue, Zhiyuan Liu, Maosong Sun, Heng Ji
Fourier Analysis on Robustness of Graph Convolutional Neural Networks for Skeleton-based Action Recognition
Nariki Tanaka, Hiroshi Kera, Kazuhiko Kawamoto
Membership Inference Attacks against Language Models via Neighbourhood Comparison
Justus Mattern, Fatemehsadat Mireshghallah, Zhijing Jin, Bernhard Schölkopf, Mrinmaya Sachan, Taylor Berg-Kirkpatrick
Two Heads are Better than One: Towards Better Adversarial Robustness by Combining Transduction and Rejection
Nils Palumbo, Yang Guo, Xi Wu, Jiefeng Chen, Yingyu Liang, Somesh Jha
Adversarial Attack On Yolov5 For Traffic And Road Sign Detection
Sanyam Jain
Modeling Adversarial Attack on Pre-trained Language Models as Sequential Decision Making
Xuanjie Fang, Sijie Cheng, Yang Liu, Wei Wang
On the Importance of Backbone to the Adversarial Robustness of Object Detectors
Xiao Li, Hang Chen, Xiaolin Hu
Rapid Plug-in Defenders
Kai Wu, Yujian Betterest Li, Jian Lou, Xiaoyu Zhang, Handing Wang, Jing Liu
Rethinking Adversarial Policies: A Generalized Attack Formulation and Provable Defense in RL
Xiangyu Liu, Souradip Chakraborty, Yanchao Sun, Furong Huang