Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, and on exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial to the reliability and security of AI systems in applications ranging from autonomous vehicles to medical diagnosis and cybersecurity.
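To make "subtly altering input data" concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest gradient-based attacks. This is an illustrative PyTorch example, not the method of any paper listed on this page; the toy model, input shapes, and epsilon budget are assumptions chosen for the demo.

```python
# Minimal FGSM sketch: nudge every input feature in the direction of the
# loss gradient's sign, bounded by a perturbation budget epsilon.
# The model, tensor shapes, and epsilon here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarial copy of x inside an L-infinity ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the
    # valid pixel range so the result is still a legal image.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy demo on random data with a small untrained classifier.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    images = torch.rand(4, 3, 32, 32)
    labels = torch.randint(0, 10, (4,))
    adv_images = fgsm_attack(model, images, labels)
    print("max perturbation:", (adv_images - images).abs().max().item())  # <= epsilon
```

Stronger attacks (e.g., PGD) iterate this step with projection back onto the epsilon-ball; the defenses and detectors studied in the papers below are typically evaluated against perturbations of this kind.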
Papers
Guidance Through Surrogate: Towards a Generic Diagnostic Attack
Muzammal Naseer, Salman Khan, Fatih Porikli, Fahad Shahbaz Khan
Adversarial attacks and defenses on ML- and hardware-based IoT device fingerprinting and identification
Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdrán, Gérôme Bovet, Gregorio Martínez Pérez
Defense Against Adversarial Attacks on Audio DeepFake Detection
Piotr Kawa, Marcin Plata, Piotr Syga
In and Out-of-Domain Text Adversarial Robustness via Label Smoothing
Yahan Yang, Soham Dan, Dan Roth, Insup Lee
A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks
Yifan Zhang, Junhui Hou, Yixuan Yuan
Multi-head Uncertainty Inference for Adversarial Attack Detection
Yuqi Yang, Songyun Yang, Jiyang Xie, Zhongwei Si, Kai Guo, Ke Zhang, Kongming Liang
SAIF: Sparse Adversarial and Imperceptible Attack Framework
Tooba Imtiaz, Morgan Kohler, Jared Miller, Zifeng Wang, Mario Sznaier, Octavia Camps, Jennifer Dy
Synthesis of Adversarial DDoS Attacks Using Tabular Generative Adversarial Networks
Abdelmageed Ahmed Hassan, Mohamed Sayed Hussein, Ahmed Shehata AboMoustafa, Sarah Hossam Elmowafy