Adversarial Attacks
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, and on exploring attack strategies across different model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
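As a concrete illustration of the perturbation idea described above, the sketch below implements the Fast Gradient Sign Method (FGSM), one of the simplest gradient-based evasion attacks: it nudges each input pixel a small step in the direction that increases the classifier's loss. This is a minimal sketch assuming a PyTorch image classifier with inputs in [0, 1]; the model, labels, and epsilon value are illustrative placeholders, not taken from any paper listed here.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Fast Gradient Sign Method (hypothetical setup).

    model: a classifier mapping images to logits
    x:     input batch, pixel values in [0, 1]
    y:     true labels
    epsilon: maximum per-pixel perturbation (L-infinity bound)
    """
    x = x.clone().detach().requires_grad_(True)
    # Compute the classification loss on the clean inputs.
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage (assuming `model`, `x`, `y` are defined):
#   x_adv = fgsm_attack(model, x, y)
#   preds = model(x_adv).argmax(dim=1)  # often differs from y
#   despite a perturbation that is nearly invisible to humans.
```

Even this one-step attack frequently flips predictions on undefended models, which is why the defenses and detection methods surveyed in the papers below are an active research area.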
Papers
Explainability-Based Adversarial Attack on Graphs Through Edge Perturbation
Dibaloke Chanda, Saba Heidari Gheshlaghi, Nasim Yahya Soltani
Attack Tree Analysis for Adversarial Evasion Attacks
Yuki Yamaguchi, Toshiaki Aoki
Adversarial Attacks on Image Classification Models: Analysis and Defense
Jaydip Sen, Abhiraj Sen, Ananda Chatterjee
A Malware Classification Survey on Adversarial Attacks and Defences
Mahesh Datta Sai Ponnuru, Likhitha Amasala, Tanu Sree Bhimavarapu, Guna Chaitanya Garikipati
Towards Transferable Targeted 3D Adversarial Attack in the Physical World
Yao Huang, Yinpeng Dong, Shouwei Ruan, Xiao Yang, Hang Su, Xingxing Wei
Embodied Adversarial Attack: A Dynamic Robust Physical Attack in Autonomous Driving
Yitong Sun, Yao Huang, Xingxing Wei
Adversarial Robustness on Image Classification with $k$-means
Rollin Omari, Junae Kim, Paul Montague
Continual Adversarial Defense
Qian Wang, Yaoyao Liu, Hefei Ling, Yingwei Li, Qihao Liu, Ping Li, Jiazhong Chen, Alan Yuille, Ning Yu