Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, and on exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
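For intuition, the canonical one-step attack on image classifiers is the Fast Gradient Sign Method (FGSM): perturb the input in the direction of the loss gradient's sign, bounded by a small L-infinity budget epsilon. The sketch below is a minimal PyTorch illustration of that idea, not an implementation from any of the papers listed here; the model, labels, and epsilon value are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Fast Gradient Sign Method (Goodfellow et al., 2015).

    Shifts input `x` by `epsilon` in the sign of the loss gradient,
    i.e., the direction that locally maximizes the classifier's loss.
    `epsilon` here assumes inputs scaled to [0, 1].
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed gradient step, then clip back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Robustness evaluations typically iterate this step (as in projected gradient descent) and report accuracy under attack; single-step FGSM alone is considered a weak baseline.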
Papers
Spear and Shield: Adversarial Attacks and Defense Methods for Model-Based Link Prediction on Continuous-Time Dynamic Graphs
Dongjin Lee, Juho Lee, Kijung Shin
Enhancing Adversarial Attacks: The Similar Target Method
Shuo Zhang, Ziruo Wang, Zikai Zhou, Huanran Chen
On the Adversarial Robustness of Multi-Modal Foundation Models
Christian Schlarmann, Matthias Hein
Measuring the Effect of Causal Disentanglement on the Adversarial Robustness of Neural Network Models
Preben M. Ness, Dusica Marijan, Sunanda Bose
A Comparison of Adversarial Learning Techniques for Malware Detection
Pavla Louthánová, Matouš Kozák, Martin Jureček, Mark Stamp
Black-box Adversarial Attacks against Dense Retrieval Models: A Multi-view Contrastive Learning Method
Yu-An Liu, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Wei Chen, Yixing Fan, Xueqi Cheng
Towards a Practical Defense against Adversarial Attacks on Deep Learning-based Malware Detectors via Randomized Smoothing
Daniel Gibert, Giulio Zizzo, Quan Le
AIR: Threats of Adversarial Attacks on Deep Learning-Based Information Recovery
Jinyin Chen, Jie Ge, Shilian Zheng, Linhui Ye, Haibin Zheng, Weiguo Shen, Keqiang Yue, Xiaoniu Yang
DFM-X: Augmentation by Leveraging Prior Knowledge of Shortcut Learning
Shunxin Wang, Christoph Brune, Raymond Veldhuis, Nicola Strisciuglio
On the Interplay of Convolutional Padding and Adversarial Robustness
Paul Gavrikov, Janis Keuper
Not So Robust After All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks
Roman Garaev, Bader Rasheed, Adil Khan