Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, exploring various attack strategies across different model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
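To make the idea of "subtly altering input data" concrete, below is a minimal sketch of one classic gradient-based evasion attack, the Fast Gradient Sign Method (FGSM). It is an illustrative example only, not the method of any paper listed here; the model choice, epsilon value, tensor shapes, and label are assumptions made for the sketch.

```python
# Minimal FGSM sketch: a small, sign-scaled perturbation of the input
# can flip a classifier's prediction.
# Assumes a pretrained torchvision ResNet-18 and an input tensor scaled to
# [0, 1] (normalization omitted for brevity); all values are illustrative.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_attack(image, true_label, epsilon=0.03):
    """Return an adversarial copy of `image` perturbed by epsilon * sign(gradient)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()

# Illustrative usage: x is a (1, 3, 224, 224) image tensor, y an assumed label.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([207])
x_adv = fgsm_attack(x, y)
print(model(x).argmax(1), model(x_adv).argmax(1))  # prediction may change
```

The perturbation budget epsilon bounds how far each pixel may move, which is why such attacks can remain imperceptible to humans while still changing the model's output; stronger iterative variants (e.g., PGD) apply this step repeatedly.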
Papers
SpecFormer: Guarding Vision Transformer Robustness via Maximum Singular Value Penalization
Xixu Hu, Runkai Zheng, Jindong Wang, Cheuk Hang Leung, Qi Wu, Xing Xie
Dual Teacher Knowledge Distillation with Domain Alignment for Face Anti-spoofing
Zhe Kong, Wentian Zhang, Tao Wang, Kaihao Zhang, Yuexiang Li, Xiaoying Tang, Wenhan Luo
Explainability-Based Adversarial Attack on Graphs Through Edge Perturbation
Dibaloke Chanda, Saba Heidari Gheshlaghi, Nasim Yahya Soltani
Attack Tree Analysis for Adversarial Evasion Attacks
Yuki Yamaguchi, Toshiaki Aoki
Adversarial Attacks on Image Classification Models: Analysis and Defense
Jaydip Sen, Abhiraj Sen, Ananda Chatterjee
A Malware Classification Survey on Adversarial Attacks and Defences
Mahesh Datta Sai Ponnuru, Likhitha Amasala, Tanu Sree Bhimavarapu, Guna Chaitanya Garikipati
Towards Transferable Targeted 3D Adversarial Attack in the Physical World
Yao Huang, Yinpeng Dong, Shouwei Ruan, Xiao Yang, Hang Su, Xingxing Wei