Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly perturbing input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, and on exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in applications ranging from autonomous vehicles to medical diagnosis and cybersecurity.
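To make the core idea concrete, the sketch below implements the classic fast gradient sign method (FGSM), which perturbs an input by a small step in the direction of the sign of the loss gradient so that the model's loss increases while the change stays imperceptibly small. This is a generic illustration of the attack family, not the method of any paper listed below; the SimpleNet model, the random batch, and the epsilon value are all hypothetical stand-ins.

```python
import torch
import torch.nn as nn

class SimpleNet(nn.Module):
    """Tiny classifier standing in for any differentiable model (assumption)."""
    def __init__(self, in_dim=784, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, n_classes)
        )

    def forward(self, x):
        return self.net(x)

def fgsm_attack(model, x, y, epsilon=0.1):
    """Return x perturbed by one epsilon-sized signed-gradient step (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Stepping in the direction of the gradient's sign increases the loss,
    # nudging the input toward misclassification with a bounded perturbation.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

if __name__ == "__main__":
    model = SimpleNet()
    x = torch.rand(4, 784)           # stand-in input batch (assumption)
    y = torch.randint(0, 10, (4,))   # stand-in labels (assumption)
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())   # perturbation magnitude bounded by epsilon
```

Defenses such as adversarial training and run-time detection, surveyed in several of the papers below, are typically evaluated against perturbations of exactly this bounded-magnitude form.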
Papers
Adversarial Attacks and Defenses on Text-to-Image Diffusion Models: A Survey
Chenyu Zhang, Mingwang Hu, Wenhui Li, Lanjun Wang
Targeted Augmented Data for Audio Deepfake Detection
Marcella Astrid, Enjie Ghorbel, Djamila Aouada
Was it Slander? Towards Exact Inversion of Generative Language Models
Adrians Skapars, Edoardo Manino, Youcheng Sun, Lucas C. Cordeiro
A Survey of Attacks on Large Vision-Language Models: Resources, Advances, and Future Trends
Daizong Liu, Mingyu Yang, Xiaoye Qu, Pan Zhou, Yu Cheng, Wei Hu
Performance Evaluation of Knowledge Graph Embedding Approaches under Non-adversarial Attacks
Sourabh Kapoor, Arnab Sharma, Michael Röder, Caglar Demir, Axel-Cyrille Ngonga Ngomo
A Hybrid Training-time and Run-time Defense Against Adversarial Attacks in Modulation Classification
Lu Zhang, Sangarapillai Lambotharan, Gan Zheng, Guisheng Liao, Ambra Demontis, Fabio Roli
Universal Multi-view Black-box Attack against Object Detectors via Layout Optimization
Donghua Wang, Wen Yao, Tingsong Jiang, Chao Li, Xiaoqian Chen
On Evaluating The Performance of Watermarked Machine-Generated Texts Under Adversarial Attacks
Zesen Liu, Tianshuo Cong, Xinlei He, Qi Li
Remembering Everything Makes You Vulnerable: A Limelight on Machine Unlearning for Personalized Healthcare Sector
Ahan Chatterjee, Sai Anirudh Aryasomayajula, Rajat Chaudhari, Subhajit Paul, Vishwa Mohan Singh
Jailbreak Attacks and Defenses Against Large Language Models: A Survey
Sibo Yi, Yule Liu, Zhen Sun, Tianshuo Cong, Xinlei He, Jiaxing Song, Ke Xu, Qi Li