Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, exploring various attack strategies across different model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
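The core idea of "subtly altering input data" can be sketched with the classic fast gradient sign method (FGSM): perturb the input by a small epsilon in the direction of the loss gradient's sign. A minimal sketch below, using a toy logistic-regression model with hypothetical weights (not drawn from any of the papers listed); the analytic gradient stands in for what a deep-learning framework would compute by backpropagation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One-step FGSM: x_adv = x + eps * sign(d loss / d x).

    Uses the analytic gradient of binary cross-entropy through a
    logistic-regression model p = sigmoid(w . x + b).
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # gradient of BCE loss w.r.t. the input x
    return x + eps * np.sign(grad_x)

# Toy example: a clean input the model classifies confidently as class 1.
w = np.array([2.0, -1.0, 0.5])    # hypothetical model weights
b = 0.1
x = np.array([0.6, -0.4, 0.3])    # clean input
y = 1.0                           # true label

x_adv = fgsm_attack(x, y, w, b, eps=0.3)
p_clean = sigmoid(w @ x + b)
p_adv = sigmoid(w @ x_adv + b)
# The perturbed input shifts the predicted probability toward the wrong class,
# even though each feature changed by at most 0.3.
```

With a larger epsilon, or on the high-dimensional inputs of a deep network, the same one-step perturbation is typically enough to flip the predicted class entirely, which is why much of the work below focuses on robust training and certified defenses.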
Papers
Distilling Adversarial Robustness Using Heterogeneous Teachers
Jieren Deng, Aaron Palmer, Rigel Mahmood, Ethan Rathbun, Jinbo Bi, Kaleel Mahmood, Derek Aguiar
A Robust Defense against Adversarial Attacks on Deep Learning-based Malware Detectors via (De)Randomized Smoothing
Daniel Gibert, Giulio Zizzo, Quan Le, Jordi Planes
On the Duality Between Sharpness-Aware Minimization and Adversarial Training
Yihao Zhang, Hangzhou He, Jingyu Zhu, Huanran Chen, Yifei Wang, Zeming Wei
An Adversarial Approach to Evaluating the Robustness of Event Identification Models
Obai Bahwal, Oliver Kosut, Lalitha Sankar
Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models
Christian Schlarmann, Naman Deep Singh, Francesco Croce, Matthias Hein
Attacks on Node Attributes in Graph Neural Networks
Ying Xu, Michael Lanier, Anindya Sarkar, Yevgeniy Vorobeychik
Adversarial Feature Alignment: Balancing Robustness and Accuracy in Deep Learning via Adversarial Training
Leo Hyun Park, Jaeuk Kim, Myung Gyo Oh, Jaewoo Park, Taekyoung Kwon
AICAttack: Adversarial Image Captioning Attack with Attention-Based Optimization
Jiyao Li, Mingze Ni, Yifei Dong, Tianqing Zhu, Wei Liu
Self-Guided Robust Graph Structure Refinement
Yeonjun In, Kanghoon Yoon, Kibum Kim, Kijung Shin, Chanyoung Park
DART: A Principled Approach to Adversarially Robust Unsupervised Domain Adaptation
Yunjuan Wang, Hussein Hazimeh, Natalia Ponomareva, Alexey Kurakin, Ibrahim Hammoud, Raman Arora
Quantum-Inspired Analysis of Neural Network Vulnerabilities: The Role of Conjugate Variables in System Attacks
Jun-Jie Zhang, Deyu Meng