Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly perturbing input data, causing misclassifications or other erroneous outputs. Current research focuses on building more robust models and detection methods, and on exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in applications ranging from autonomous vehicles to medical diagnosis and cybersecurity.
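For readers new to the area, the core mechanism behind many of these attacks is a small gradient-guided perturbation of the input. Below is a minimal, illustrative sketch of the classic Fast Gradient Sign Method (FGSM) in PyTorch; it is not drawn from any paper listed here, and `model`, `x`, `y`, and `epsilon` are placeholder names for a trained classifier, a batch of inputs scaled to [0, 1], ground-truth labels, and the L-infinity perturbation budget.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step Fast Gradient Sign Method (FGSM).

    Nudges each input feature by +/- epsilon in the direction that
    increases the classification loss, which is often enough to flip
    the model's prediction while the change stays visually subtle.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb along the gradient sign and keep pixels in the valid [0, 1] range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage: adversarial_images = fgsm_attack(classifier, images, labels)
```

Iterative variants such as PGD repeat this step with a smaller step size and project back onto the epsilon-ball after each iteration; several of the papers below study attacks and defenses built on this family of methods.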
Papers
MMAD-Purify: A Precision-Optimized Framework for Efficient and Scalable Multi-Modal Attacks
Xinxin Liu, Zhongliang Guo, Siyuan Huang, Chun Pong Lau
Multi-style conversion for semantic segmentation of lesions in fundus images by adversarial attacks
Clément Playout, Renaud Duval, Marie Carole Boucher, Farida Cheriet
Hiding-in-Plain-Sight (HiPS) Attack on CLIP for Targeted Object Removal from Images
Arka Daw, Megan Hong-Thanh Chung, Maria Mahbub, Amir Sadovnik
Unitary Multi-Margin BERT for Robust Natural Language Processing
Hao-Yuan Chang, Kang L. Wang
Low-Rank Adversarial PGD Attack
Dayana Savostianova, Emanuele Zangrando, Francesco Tudisco
Perseus: Leveraging Common Data Patterns with Curriculum Learning for More Robust Graph Neural Networks
Kaiwen Xia, Huijun Wu, Duanyu Li, Min Xie, Ruibo Wang, Wenzhe Zhang
DAT: Improving Adversarial Robustness via Generative Amplitude Mix-up in Frequency Domain
Fengpeng Li, Kemou Li, Haiwei Wu, Jinyu Tian, Jiantao Zhou
Taking off the Rose-Tinted Glasses: A Critical Look at Adversarial ML Through the Lens of Evasion Attacks
Kevin Eykholt, Farhan Ahmed, Pratik Vaishnavi, Amir Rahmati
Security of and by Generative AI platforms
Hari Hayagreevan, Souvik Khamaru
Efficient and Effective Universal Adversarial Attack against Vision-Language Pre-training Models
Fan Yang, Yihao Huang, Kailong Wang, Ling Shi, Geguang Pu, Yang Liu, Haoyu Wang
How to Backdoor Consistency Models?
Chengen Wang, Murat Kantarcioglu
Adversarially Robust Out-of-Distribution Detection Using Lyapunov-Stabilized Embeddings
Hossein Mirzaei, Mackenzie W. Mathis
Towards Calibrated Losses for Adversarial Robust Reject Option Classification
Vrund Shah, Tejas Chaudhari, Naresh Manwani
Time Traveling to Defend Against Adversarial Example Attacks in Image Classification
Anthony Etim, Jakub Szefer
Towards Assurance of LLM Adversarial Robustness using Ontology-Driven Argumentation
Tomas Bueno Momcilovic, Beat Buesser, Giulio Zizzo, Mark Purcell
RAB$^2$-DEF: Dynamic and explainable defense against adversarial attacks in Federated Learning to fair poor clients
Nuria Rodríguez-Barroso, M. Victoria Luzón, Francisco Herrera