Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly perturbing input data so that the model produces misclassifications or other erroneous outputs. Current research pursues two complementary directions: developing more robust models and detection methods, and exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for the reliability and security of AI systems in applications ranging from autonomous vehicles to medical diagnosis and cybersecurity.
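To make the core mechanism concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one of the gradient-based attacks addressed by the defense papers listed further down. The names `model`, `x`, `y`, and `epsilon` are illustrative placeholders, not code from any of the listed papers.

```python
# Minimal FGSM sketch in PyTorch. All names (model, x, y, epsilon) are
# hypothetical placeholders used only for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return x perturbed by one signed-gradient step, bounded by epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # loss the attacker wants to grow
    loss.backward()
    # The defining FGSM step: move each input component by +/- epsilon along
    # the sign of the loss gradient, then clamp back to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage: a random linear classifier on flattened 8x8 "images".
model = nn.Linear(64, 10)
x = torch.rand(4, 64)           # batch of 4 inputs in [0, 1]
y = torch.randint(0, 10, (4,))  # ground-truth labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # perturbation never exceeds epsilon
```

PGD, the other attack commonly paired with FGSM in the papers below, iterates this same step several times, projecting back into the epsilon-ball after each update.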
Papers
BankTweak: Adversarial Attack against Multi-Object Trackers by Manipulating Feature Banks
Woojin Shin, Donghwa Kang, Daejin Choi, Brent Kang, Jinkyu Lee, Hyeongboo Baek
Enhancing Transferability of Adversarial Attacks with GE-AdvGAN+: A Comprehensive Framework for Gradient Editing
Zhibo Jin, Jiayu Zhang, Zhiyu Zhu, Chenyu Zhang, Jiahao Huang, Jianlong Zhou, Fang Chen
Leveraging Information Consistency in Frequency and Spatial Domain for Adversarial Attacks
Zhibo Jin, Jiayu Zhang, Zhiyu Zhu, Xinyi Wang, Yiyun Huang, Huaming Chen
Query-Efficient Video Adversarial Attack with Stylized Logo
Duoxun Tang, Yuxin Cao, Xi Xiao, Derui Wang, Sheng Wen, Tianqing Zhu
First line of defense: A robust first layer mitigates adversarial attacks
Janani Suresh, Nancy Nayak, Sheetal Kalyani
Latent Feature and Attention Dual Erasure Attack against Multi-View Diffusion Models for 3D Assets Protection
Jingwei Sun, Xuchong Zhang, Changfeng Sun, Qicheng Bai, Hongbin Sun
Correlation Analysis of Adversarial Attack in Time Series Classification
Zhengyang Li, Wenhao Liang, Chang Dong, Weitong Chen, Dong Huang
Revisiting Min-Max Optimization Problem in Adversarial Training
Sina Hajer Ahmadi, Hassan Bahrami
A Grey-box Attack against Latent Diffusion Model-based Image Editing by Posterior Collapse
Zhongliang Guo, Lei Fang, Jingyu Lin, Yifei Qian, Shuai Zhao, Zeyu Wang, Junhao Dong, Cunjian Chen, Ognjen Arandjelović, Chun Pong Lau
Towards Efficient Formal Verification of Spiking Neural Network
Baekryun Seong, Jieung Kim, Sang-Ki Ko
Adversarial Attack for Explanation Robustness of Rationalization Models
Yuankai Zhang, Lingxiao Kong, Haozhao Wang, Ruixuan Li, Jun Wang, Yuhua Li, Wei Liu
Security Assessment of Hierarchical Federated Deep Learning
D. Alqattan, R. Sun, H. Liang, G. Nicosia, V. Snasel, R. Ranjan, V. Ojha
MsMemoryGAN: A Multi-scale Memory GAN for Palm-vein Adversarial Purification
Huafeng Qin, Yuming Fu, Huiyan Zhang, Mounim A. El-Yacoubi, Xinbo Gao, Qun Song, Jun Wang
Privacy-preserving Universal Adversarial Defense for Black-box Models
Qiao Li, Cong Wu, Jing Chen, Zijun Zhang, Kun He, Ruiying Du, Xinxin Wang, Qingchuang Zhao, Yang Liu
Robust Image Classification: Defensive Strategies against FGSM and PGD Adversarial Attacks
Hetvi Waghela, Jaydip Sen, Sneha Rakshit