Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, exploring various attack strategies across different model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
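To make the idea of "subtly altering input data" concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), a classic single-step attack. It is an illustrative baseline only, not the method of any paper listed here; the classifier `model`, the assumption that inputs are scaled to [0, 1], and the `epsilon` value are stand-ins chosen for the example.

    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, epsilon=0.03):
        """One-step FGSM: nudge each input feature in the direction
        that increases the classification loss.

        model:   a differentiable classifier (assumed)
        x:       input batch, assumed scaled to [0, 1]
        y:       true labels
        epsilon: maximum per-feature perturbation (illustrative value)
        """
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step along the sign of the gradient, then clip back to the valid input range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Despite the perturbation being small enough to be visually or semantically negligible, such inputs can flip a model's prediction, which is the failure mode the papers below study across modalities and architectures.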
Papers
Longitudinal Mammogram Exam-based Breast Cancer Diagnosis Models: Vulnerability to Adversarial Attacks
Zhengbo Zhou, Degan Hao, Dooman Arefan, Margarita Zuley, Jules Sumkin, Shandong Wu
Embedding-based classifiers can detect prompt injection attacks
Md. Ahsan Ayub, Subhabrata Majumdar
Enhancing Adversarial Attacks through Chain of Thought
Jingbo Su
AdvI2I: Adversarial Image Attack on Image-to-Image Diffusion models
Yaopei Zeng, Yuanpu Cao, Bochuan Cao, Yurui Chang, Jinghui Chen, Lu Lin
SeriesGAN: Time Series Generation via Adversarial and Autoregressive Learning
MohammadReza EskandariNasab, Shah Muhammad Hamdi, Soukaina Filali Boubrahimi
Resilience in Knowledge Graph Embeddings
Arnab Sharma, N'Dah Jean Kouagou, Axel-Cyrille Ngonga Ngomo
Evaluating the Robustness of LiDAR Point Cloud Tracking Against Adversarial Attack
Shengjing Tian, Yinan Han, Xiantong Zhao, Bin Liu, Xiuping Liu
On the Geometry of Regularization in Adversarial Training: High-Dimensional Asymptotics and Generalization Bounds
Matteo Vilucchio, Nikolaos Tsilivis, Bruno Loureiro, Julia Kempe
Model Mimic Attack: Knowledge Distillation for Provably Transferable Adversarial Examples
Kirill Lukyanov, Andrew Perminov, Denis Turdakov, Mikhail Pautov