Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, while exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
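To make the idea concrete, the sketch below illustrates one of the simplest attack strategies, the fast gradient sign method (FGSM): the input is nudged by a small, bounded amount in the direction that most increases the model's loss, often enough to flip the prediction. The toy classifier, input shape, and `epsilon` budget are illustrative assumptions, not taken from any of the papers listed below.

```python
# Minimal FGSM sketch, assuming a PyTorch image classifier with inputs in [0, 1].
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb x by one signed-gradient step of size epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Step in the sign of the input gradient, then clamp to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy linear classifier and a random "image", purely for demonstration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)   # benign input
    y = torch.tensor([3])          # assumed true label
    x_adv = fgsm_attack(model, x, y)
    print("original prediction:   ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

In practice, stronger attacks iterate this step (e.g., projected gradient descent) or operate without gradient access, and the papers below study such variants across text, graphs, and medical imaging.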
Papers
EaTVul: ChatGPT-based Evasion Attack Against Software Vulnerability Detection
Shigang Liu, Di Cao, Junae Kim, Tamas Abraham, Paul Montague, Seyit Camtepe, Jun Zhang, Yang Xiang
Debiased Graph Poisoning Attack via Contrastive Surrogate Objective
Kanghoon Yoon, Yeonjun In, Namkyeong Lee, Kibum Kim, Chanyoung Park
S-E Pipeline: A Vision Transformer (ViT) based Resilient Classification Pipeline for Medical Imaging Against Adversarial Attacks
Neha A S, Vivek Chaturvedi, Muhammad Shafique
Algebraic Adversarial Attacks on Integrated Gradients
Lachlan Simpson, Federico Costanza, Kyle Millar, Adriel Cheng, Cheng-Chew Lim, Hong Gunn Chew
Enhancing Transferability of Targeted Adversarial Examples: A Self-Universal Perspective
Bowen Peng, Li Liu, Tianpeng Liu, Zhen Liu, Yongxiang Liu
On Feasibility of Intent Obfuscating Attacks
Zhaobin Li, Patrick Shafto
Imposter.AI: Adversarial Attacks with Hidden Intentions towards Aligned Large Language Models
Xiao Liu, Liangzhi Li, Tong Xiang, Fuying Ye, Lu Wei, Wangyue Li, Noa Garcia
Towards Robust Vision Transformer via Masked Adaptive Ensemble
Fudong Lin, Jiadong Lou, Xu Yuan, Nian-Feng Tzeng