Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, and on exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for the reliability and security of AI systems in applications ranging from autonomous vehicles to medical diagnosis and cybersecurity.
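As a rough illustration of the "subtly altering input data" idea, the sketch below shows a single gradient-sign (FGSM-style) perturbation in PyTorch. The names `model`, `x`, and `y` are hypothetical placeholders for a differentiable classifier, an input batch scaled to [0, 1], and its true labels; this is a generic textbook example and is not drawn from any of the papers listed here.

```python
# Minimal gradient-sign (FGSM-style) perturbation sketch.
# Assumptions (hypothetical): `model` is a differentiable PyTorch classifier,
# `x` is an input tensor scaled to [0, 1], and `y` holds the true labels.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return x nudged by epsilon in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # A single signed-gradient step: visually small, yet often enough to
    # flip the model's prediction.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```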
Papers
BinarySelect to Improve Accessibility of Black-Box Attack Research
Shatarupa Ghosh, Jonathan Rusert
A$^2$RNet: Adversarial Attack Resilient Network for Robust Infrared and Visible Image Fusion
Jiawei Li, Hongwei Yu, Jiansheng Chen, Xinlong Ding, Jinlong Wang, Jinyuan Liu, Bochao Zou, Huimin Ma
Prompt2Perturb (P2P): Text-Guided Diffusion-Based Adversarial Attacks on Breast Ultrasound Images
Yasamin Medghalchi, Moein Heidari, Clayton Allard, Leonid Sigal, Ilker Hacihaliloglu
On the Generation and Removal of Speaker Adversarial Perturbation for Voice-Privacy Protection
Chenyang Guo, Liping Chen, Zhuhai Li, Kong Aik Lee, Zhen-Hua Ling, Wu Guo
Evaluating Adversarial Attacks on Traffic Sign Classifiers beyond Standard Baselines
Svetlana Pavlitska, Leopold Müller, J. Marius Zöllner
Deep Learning Model Security: Threats and Defenses
Tianyang Wang, Ziqian Bi, Yichao Zhang, Ming Liu, Weiche Hsieh, Pohsun Feng, Lawrence K.Q. Yan, Yizhu Wen, Benji Peng, Junyu Liu, Keyu Chen, Sen Zhang, Ming Li, Chuanqi Jiang, Xinyuan Song, Junjie Yang, Bowen Jing, Jintao Ren, Junhao Song, Hong-Ming Tseng, Silin Chen, Yunze Wang, Chia Xin Liang, Jiawei Xu, Xuanhe Pan, Jinlang Wang, Qian Niu
Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models
Jiahui Li, Yongchang Hao, Haoyu Xu, Xing Wang, Yu Hong
How Does the Smoothness Approximation Method Facilitate Generalization for Federated Adversarial Learning?
Wenjun Ding, Ying An, Lixing Chen, Shichao Kan, Fan Wu, Zhe Qu
Doubly-Universal Adversarial Perturbations: Deceiving Vision-Language Models Across Both Images and Text with a Single Perturbation
Hee-Seon Kim, Minbeom Kim, Changick Kim
Adversarial Vulnerabilities in Large Language Models for Time Series Forecasting
Fuqiang Liu, Sicong Jiang, Luis Miranda-Moreno, Seongjin Choi, Lijun Sun
What You See Is Not Always What You Get: An Empirical Study of Code Comprehension by Large Language Models
Bangshuo Zhu, Jiawen Wen, Huaming Chen
MAGIC: Mastering Physical Adversarial Generation in Context through Collaborative LLM Agents
Yun Xing, Nhat Chung, Jie Zhang, Yue Cao, Ivor Tsang, Yang Liu, Lei Ma, Qing Guo
AHSG: Adversarial Attacks on High-level Semantics in Graph Neural Networks
Kai Yuan, Xiaobing Pei, Haoran Yang
Addressing Key Challenges of Adversarial Attacks and Defenses in the Tabular Domain: A Methodological Framework for Coherence and Consistency
Yael Itzhakev, Amit Giloni, Yuval Elovici, Asaf Shabtai
Backdoor Attacks against No-Reference Image Quality Assessment Models via A Scalable Trigger
Yi Yu, Song Xia, Xun Lin, Wenhan Yang, Shijian Lu, Yap-peng Tan, Alex Kot
A Generative Victim Model for Segmentation
Aixuan Li, Jing Zhang, Jiawei Shi, Yiran Zhong, Yuchao Dai
Adversarial Filtering Based Evasion and Backdoor Attacks to EEG-Based Brain-Computer Interfaces
Lubin Meng, Xue Jiang, Xiaoqing Chen, Wenzhong Liu, Hanbin Luo, Dongrui Wu
Defensive Dual Masking for Robust Adversarial Defense
Wangli Yang, Jie Yang, Yi Guo, Johan Barthelemy