Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly perturbing input data so that the model misclassifies it or produces other erroneous outputs. Current research focuses on building more robust models and detection methods, and on exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial to the reliability and security of AI systems in applications ranging from autonomous vehicles to medical diagnosis and cybersecurity.
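To make "subtly altering input data" concrete, below is a minimal sketch of one classic image-domain attack, the Fast Gradient Sign Method (FGSM; Goodfellow et al., 2015), written in PyTorch. The epsilon value and the [0, 1] pixel range are illustrative assumptions, and none of the papers listed here necessarily uses this particular method.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Fast Gradient Sign Method: nudge each input feature a small
    amount in the direction that most increases the model's loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, clamped back to the valid pixel range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Evaluating a classifier on `fgsm_attack` outputs instead of clean inputs is the simplest robustness check; stronger multi-step attacks such as PGD follow the same gradient-sign pattern with repeated small steps.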
Papers
MultiAgent Collaboration Attack: Investigating Adversarial Attacks in Large Language Model Collaborations via Debate
Alfonso Amayuelas, Xianjun Yang, Antonis Antoniades, Wenyue Hua, Liangming Pan, William Wang
Jailbreaking as a Reward Misspecification Problem
Zhihui Xie, Jiahui Gao, Lei Li, Zhenguo Li, Qi Liu, Lingpeng Kong
Explainable AI Security: Exploring Robustness of Graph Neural Networks to Adversarial Attacks
Tao Wu, Canyixing Cui, Xingping Xian, Shaojie Qiao, Chao Wang, Lin Yuan, Shui Yu
NoiSec: Harnessing Noise for Security against Adversarial and Backdoor Attacks
Md Hasan Shahriar, Ning Wang, Y. Thomas Hou, Wenjing Lou
MaskPure: Improving Defense Against Text Adversaries with Stochastic Purification
Harrison Gietz, Jugal Kalita
Can Go AIs be adversarially robust?
Tom Tseng, Euan McLean, Kellin Pelrine, Tony T. Wang, Adam Gleave
Dissecting Adversarial Robustness of Multimodal LM Agents
Chen Henry Wu, Rishi Shah, Jing Yu Koh, Ruslan Salakhutdinov, Daniel Fried, Aditi Raghunathan
Adversarial Attacks on Large Language Models in Medicine
Yifan Yang, Qiao Jin, Furong Huang, Zhiyong Lu
MirrorCheck: Efficient Adversarial Defense for Vision-Language Models
Samar Fares, Klea Ziu, Toluwani Aremu, Nikita Durasov, Martin Takáč, Pascal Fua, Karthik Nandakumar, Ivan Laptev
Potion: Towards Poison Unlearning
Stefan Schoepf, Jack Foster, Alexandra Brintrup
Improving Adversarial Robustness via Feature Pattern Consistency Constraint
Jiacong Hu, Jingwen Ye, Zunlei Feng, Jiazhen Yang, Shunyu Liu, Xiaotian Yu, Lingxiang Jia, Mingli Song
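On the defense side, the "more robust models" mentioned in the overview most commonly means adversarial training, in which the model is fit to perturbed rather than clean inputs. Below is a minimal sketch of one training step, assuming the hypothetical `fgsm_attack` helper from the earlier example is in scope; published defenses, including several of the papers above, typically use stronger multi-step attacks and additional regularization rather than this simplified loop.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One simplified adversarial-training step: train the model on
    adversarially perturbed inputs instead of the clean batch."""
    model.train()
    x_adv = fgsm_attack(model, x, y, epsilon)  # craft attacks on the fly
    optimizer.zero_grad()  # clear gradients accumulated while crafting the attack
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()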