Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, and on exploring attack strategies across a range of model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
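As an illustration of the mechanism described above, the sketch below shows the classic Fast Gradient Sign Method (FGSM), which perturbs an input by a single bounded step along the sign of the loss gradient. It is written in PyTorch; the model, the perturbation budget `epsilon`, and the assumed [0, 1] input range are illustrative choices and are not drawn from any of the papers listed here.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example with the Fast Gradient Sign Method.

    A single gradient step moves each input value by +/- epsilon in the
    direction that increases the classification loss.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient, then clamp back
    # to the valid input range (assumed here to be [0, 1]).
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Defenses such as the adversarial-training approaches among the papers below typically generate perturbed inputs of this kind during training so the model learns to classify them correctly.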
Papers
RFLA: A Stealthy Reflected Light Adversarial Attack in the Physical World
Donghua Wang, Wen Yao, Tingsong Jiang, Chao Li, Xiaoqian Chen
On the Sensitivity of Deep Load Disaggregation to Adversarial Attacks
Hafsa Bousbiat, Yassine Himeur, Abbes Amira, Wathiq Mansoor
Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation
Asif Hanif, Muzammal Naseer, Salman Khan, Mubarak Shah, Fahad Shahbaz Khan
Vulnerability-Aware Instance Reweighting For Adversarial Training
Olukorede Fakorede, Ashutosh Kumar Nirala, Modeste Atsague, Jin Tian
A Theoretical Perspective on Subnetwork Contributions to Adversarial Robustness
Jovon Craig, Josh Andle, Theodore S. Nowak, Salimeh Yasaei Sekeh
Fooling Contrastive Language-Image Pre-trained Models with CLIPMasterPrints
Matthias Freiberger, Peter Kun, Christian Igel, Anders Sundnes Løvlie, Sebastian Risi