Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly perturbing input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, and on exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for the reliability and security of AI systems in applications ranging from autonomous vehicles to medical diagnosis and cybersecurity.
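To make the threat model concrete, the sketch below shows the Fast Gradient Sign Method (FGSM), a classic white-box evasion attack: the input is nudged in the direction of the sign of the loss gradient, within a small L-infinity budget epsilon. This is a minimal PyTorch illustration of the general technique, not the method of any paper listed here; the function name fgsm_attack and its default epsilon are illustrative assumptions.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Minimal FGSM sketch (illustrative, not taken from any specific paper):
    # perturb x within an L-infinity ball of radius epsilon so that the
    # model's cross-entropy loss on the true label y increases.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # The sign of the gradient is the maximal loss-increasing step under
    # the L-infinity constraint; clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

Because each pixel moves by at most epsilon, the adversarial example remains visually near-identical to the original for small epsilon, which is exactly the kind of subtle alteration the papers below attack and defend against.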
Papers
A Cost-Aware Approach to Adversarial Robustness in Neural Networks
Charles Meyers, Mohammad Reza Saleh Sedghpour, Tommy Löfstedt, Erik Elmroth
Introducing Perturb-ability Score (PS) to Enhance Robustness Against Evasion Adversarial Attacks on ML-NIDS
Mohamed elShehaby, Ashraf Matrawy
Enhancing adversarial robustness in Natural Language Inference using explanations
Alexandros Koulakos, Maria Lymperaiou, Giorgos Filandrianos, Giorgos Stamou
SoK: Security and Privacy Risks of Medical AI
Yuanhaur Chang, Han Liu, Evin Jaff, Chenyang Lu, Ning Zhang
D-CAPTCHA++: A Study of Resilience of Deepfake CAPTCHA under Transferable Imperceptible Adversarial Attack
Hong-Hanh Nguyen-Le, Van-Tuan Tran, Dinh-Thuc Nguyen, Nhien-An Le-Khac
Securing Vision-Language Models with a Robust Encoder Against Jailbreak and Adversarial Attacks
Md Zarif Hossain, Ahmed Imteaj
Module-wise Adaptive Adversarial Training for End-to-end Autonomous Driving
Tianyuan Zhang, Lu Wang, Jiaqi Kang, Xinwei Zhang, Siyuan Liang, Yuwei Chen, Aishan Liu, Xianglong Liu
Optimizing Neural Network Performance and Interpretability with Diophantine Equation Encoding
Ronald Katende
Personalized Federated Learning Techniques: Empirical Analysis
Azal Ahmad Khan, Ahmad Faraz Khan, Haider Ali, Ali Anwar
Adversarial Attacks to Multi-Modal Models
Zhihao Dou, Xin Hu, Haibo Yang, Zhuqing Liu, Minghong Fang
Unrevealed Threats: A Comprehensive Study of the Adversarial Robustness of Underwater Image Enhancement Models
Siyu Zhai, Zhibo He, Xiaofeng Cong, Junming Hou, Jie Gui, Jian Wei You, Xin Gong, James Tin-Yau Kwok, Yuan Yan Tang
PIP: Detecting Adversarial Examples in Large Vision-Language Models via Attention Patterns of Irrelevant Probe Questions
Yudong Zhang, Ruobing Xie, Jiansheng Chen, Xingwu Sun, Yu Wang
Vision-fused Attack: Advancing Aggressive and Stealthy Adversarial Text against Neural Machine Translation
Yanni Xue, Haojie Hao, Jiakai Wang, Qiang Sheng, Renshuai Tao, Yu Liang, Pu Feng, Xianglong Liu
2DSig-Detect: a semi-supervised framework for anomaly detection on image data using 2D-signatures
Xinheng Xie, Kureha Yamaguchi, Margaux Leblanc, Simon Malzard, Varun Chhabra, Victoria Nockles, Yue Wu