Adversarial Attack
Adversarial attacks deceive machine learning models by making subtle, often imperceptible changes to input data, causing misclassifications or other erroneous outputs. Current research focuses on building more robust models and detection methods, and on exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for the reliability and security of AI systems in applications ranging from autonomous vehicles to medical diagnosis and cybersecurity; a minimal illustration of how such an attack works is sketched below.
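To make the core idea concrete, here is a minimal PyTorch sketch of the fast gradient sign method (FGSM), one of the simplest gradient-based attacks of this kind. This is a generic illustration, not the method of any paper listed here; it assumes a differentiable classifier `model` whose inputs are scaled to [0, 1], and the names `fgsm_attack` and `epsilon` are chosen purely for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return an adversarially perturbed copy of input batch `x`.

    Assumes `model` outputs class logits and `x` holds values in [0, 1].
    """
    model.eval()  # attack a fixed model; no dropout/batch-norm updates
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step each input element by +/- epsilon in the direction that
    # increases the loss, then clip back to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Because each element of the perturbation is bounded by `epsilon`, the altered input is nearly indistinguishable from the original to a human, yet it can be enough to flip the model's prediction.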
Papers
Diffusion Attack: Leveraging Stable Diffusion for Naturalistic Image Attacking
Qianyu Guo, Jiaming Fu, Yawen Lu, Dongming Gan
Adversary-Robust Graph-Based Learning of WSIs
Saba Heidari Gheshlaghi, Milan Aryal, Nasim Yahyasoltani, Masoud Ganji
Adversary-Augmented Simulation to evaluate fairness on HyperLedger Fabric
Erwan Mahe, Rouwaida Abdallah, Sara Tucci-Piergiovanni, Pierre-Yves Piriou
Adversarial Attacks and Defenses in Fault Detection and Diagnosis: A Comprehensive Benchmark on the Tennessee Eastman Process
Vitaliy Pozdnyakov, Aleksandr Kovalenko, Ilya Makarov, Mikhail Drobyshevskiy, Kirill Lukyanov
DD-RobustBench: An Adversarial Robustness Benchmark for Dataset Distillation
Yifan Wu, Jiawei Du, Ping Liu, Yuewei Lin, Wei Xu, Wenqing Cheng
ADAPT to Robustify Prompt Tuning Vision Transformers
Masih Eskandar, Tooba Imtiaz, Zifeng Wang, Jennifer Dy
RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content
Zhuowen Yuan, Zidi Xiong, Yi Zeng, Ning Yu, Ruoxi Jia, Dawn Song, Bo Li
Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajectory
Sensen Gao, Xiaojun Jia, Xuhong Ren, Ivor Tsang, Qing Guo
SSCAE -- Semantic, Syntactic, and Context-aware natural language Adversarial Examples generator
Javad Rafiei Asl, Mohammad H. Rafiei, Manar Alohaly, Daniel Takabi
Problem space structural adversarial attacks for Network Intrusion Detection Systems based on Graph Neural Networks
Andrea Venturi, Dario Stabili, Mirco Marchetti
LocalStyleFool: Regional Video Style Transfer Attack Using Segment Anything Model
Yuxin Cao, Jinghao Li, Xi Xiao, Derui Wang, Minhui Xue, Hao Ge, Wei Liu, Guangwu Hu
Robust Overfitting Does Matter: Test-Time Adversarial Purification With FGSM
Linyu Tang, Lei Zhang
Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization
Yujia Liu, Chenxi Yang, Dingquan Li, Jianhao Ding, Tingting Jiang