Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, exploring various attack strategies across different model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
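As a concrete illustration of the "subtly altering input data" idea, below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest adversarial attacks. It assumes a differentiable PyTorch image classifier; the `model`, `x`, `y`, and `epsilon` names are illustrative and not drawn from any paper listed below.

```python
# Minimal FGSM sketch (Goodfellow et al., 2015). Assumes `model` is a
# PyTorch classifier, `x` an input batch in [0, 1], `y` the true labels.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb x within an L-infinity ball of radius epsilon to
    increase the classifier's loss on the true labels y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction of the sign of the loss gradient.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the perturbed input in the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()
```

For small epsilon, inputs perturbed this way frequently flip the model's prediction while remaining visually near-identical to the original, which is the failure mode much of the work below attacks or defends against.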
1,813 papers
Papers - Page 70
May 29, 2023
UMD: Unsupervised Model Detection for X2X Backdoor Attacks
Zhen Xiang, Zidi Xiong, Bo Li
Exploiting Explainability to Design Adversarial Attacks and Evaluate Attack Resilience in Hate-Speech Detection Models
Pranath Reddy Kumbam, Sohaib Uddin Syed, Prashanth Thamminedi, Suhas Harish, Ian Perera, Bonnie J. Dorr
From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework
Yangyi Chen, Hongcheng Gao, Ganqu Cui, Lifan Yuan, Dehan Kong, Hanlu Wu, Ning Shi, Bo Yuan, Longtao Huang, Hui Xue, Zhiyuan Liu, Maosong Sun +1
Fourier Analysis on Robustness of Graph Convolutional Neural Networks for Skeleton-based Action Recognition
Nariki Tanaka, Hiroshi Kera, Kazuhiko Kawamoto
Membership Inference Attacks against Language Models via Neighbourhood Comparison
Justus Mattern, Fatemehsadat Mireshghallah, Zhijing Jin, Bernhard Schölkopf, Mrinmaya Sachan, Taylor Berg-Kirkpatrick
May 27, 2023
Two Heads are Better than One: Towards Better Adversarial Robustness by Combining Transduction and Rejection
Nils Palumbo, Yang Guo, Xi Wu, Jiefeng Chen, Yingyu Liang, Somesh Jha
Adversarial Attack On Yolov5 For Traffic And Road Sign Detection
Sanyam Jain
Modeling Adversarial Attack on Pre-trained Language Models as Sequential Decision Making
Xuanjie Fang, Sijie Cheng, Yang Liu, Wei Wang
On the Importance of Backbone to the Adversarial Robustness of Object Detectors
Xiao Li, Hang Chen, Xiaolin Hu
Rapid Plug-in Defenders
Kai Wu, Yujian Betterest Li, Jian Lou, Xiaoyu Zhang, Handing Wang, Jing Liu
Rethinking Adversarial Policies: A Generalized Attack Formulation and Provable Defense in RL
Xiangyu Liu, Souradip Chakraborty, Yanchao Sun, Furong Huang
May 26, 2023
Adversarial Attacks on Online Learning to Rank with Click Feedback
Jinhang Zuo, Zhiyao Zhang, Zhiyong Wang, Shuai Li, Mohammad Hajiesmaili, Adam Wierman
DistriBlock: Identifying adversarial audio samples by leveraging characteristics of the output distribution
Matías P. Pizarro B., Dorothea Kolossa, Asja Fischer
Trust-Aware Resilient Control and Coordination of Connected and Automated Vehicles
H M Sabbir Ahmad, Ehsan Sabouni, Wei Xiao, Christos G. Cassandras, Wenchao Li
May 25, 2023
Don't Retrain, Just Rewrite: Countering Adversarial Perturbations by Rewriting Text
Ashim Gupta, Carter Wood Blum, Temma Choji, Yingjie Fei, Shalin Shah, Alakananda Vempala, Vivek Srikumar
Adversarial Attacks on Leakage Detectors in Water Distribution Networks
Paul Stahlhofen, André Artelt, Luca Hermes, Barbara Hammer
IDEA: Invariant Defense for Graph Adversarial Robustness
Shuchang Tao, Qi Cao, Huawei Shen, Yunfan Wu, Bingbing Xu, Xueqi Cheng
PEARL: Preprocessing Enhanced Adversarial Robust Learning of Image Deraining for Semantic Segmentation
Xianghao Jiao, Yaohua Liu, Jiaxin Gao, Xinyuan Chu, Risheng Liu, Xin Fan
May 24, 2023
How do humans perceive adversarial text? A reality check on the validity and naturalness of word-based adversarial attacks
Salijona Dyrmishi, Salah Ghamizi, Maxime Cordy
Frequency maps reveal the correlation between Adversarial Attacks and Implicit Bias
Lorenzo Basile, Nikos Karantzas, Alberto d'Onofrio, Luca Manzoni, Luca Bortolussi, Alex Rodriguez, Fabio Anselmi