Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, and on exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
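The papers below study many different attack settings, but the core idea of perturbing an input just enough to flip a model's prediction can be illustrated with the classic fast gradient sign method (FGSM). The following is a minimal, hypothetical sketch in PyTorch, assuming a differentiable classifier `model` and image inputs scaled to [0, 1]; it is not drawn from any of the listed works.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Illustrative white-box attack: one signed-gradient step on the input.

    Assumes `model` maps inputs to class logits and `x` lies in [0, 1].
    """
    x_adv = x.clone().detach().requires_grad_(True)
    # Compute the loss of the true labels so its gradient points toward misclassification.
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb each input element by +/- epsilon in the direction that increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed input in the valid data range.
    return x_adv.clamp(0.0, 1.0).detach()
```

Black-box, decision-based, and graph- or speech-specific attacks covered in the papers below replace the gradient step with query-based or domain-specific perturbation strategies, but they share the same objective of a small input change producing a large change in the model's output.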
Papers
Universal Distributional Decision-based Black-box Adversarial Attack with Reinforcement Learning
Yiran Huang, Yexu Zhou, Michael Hefenbrock, Till Riedel, Likun Fang, Michael Beigl
Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation
Zhihao Zhu, Chenwang Wu, Min Zhou, Hao Liao, Defu Lian, Enhong Chen
MORA: Improving Ensemble Robustness Evaluation with Model-Reweighing Attack
Yunrui Yu, Xitong Gao, Cheng-Zhong Xu
Test-time adversarial detection and robustness for localizing humans using ultra wide band channel impulse responses
Abhiram Kolli, Muhammad Jehanzeb Mirza, Horst Possegger, Horst Bischof
Impact of Adversarial Training on Robustness and Generalizability of Language Models
Enes Altinisik, Hassan Sajjad, Husrev Taha Sencar, Safa Messaoud, Sanjay Chawla
Robust Smart Home Face Recognition under Starving Federated Data
Jaechul Roh, Yajun Fang
Visually Adversarial Attacks and Defenses in the Physical World: A Survey
Xingxing Wei, Bangzheng Pu, Jiefan Lu, Baoyuan Wu
Leveraging Domain Features for Detecting Adversarial Attacks Against Deep Speech Recognition in Noise
Christian Heider Nielsen, Zheng-Hua Tan
Data-free Defense of Black Box Models Against Adversarial Attacks
Gaurav Kumar Nayak, Inder Khatri, Ruchit Rawal, Anirban Chakraborty
Isometric Representations in Neural Networks Improve Robustness
Kosio Beshkov, Jonas Verhellen, Mikkel Elle Lepperød
Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise
Jhih-Cing Huang, Yu-Lin Tsai, Chao-Han Huck Yang, Cheng-Fang Su, Chia-Mu Yu, Pin-Yu Chen, Sy-Yen Kuo
LMD: A Learnable Mask Network to Detect Adversarial Examples for Speaker Verification
Xing Chen, Jie Wang, Xiao-Lei Zhang, Wei-Qiang Zhang, Kunde Yang