Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, and on exploring attack strategies across model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
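To make "subtly altering input data" concrete, below is a minimal sketch of the classic Fast Gradient Sign Method (FGSM) in PyTorch. It is a generic illustration of gradient-based attacks, not an implementation from any of the papers listed here; the `model`, the `epsilon` budget, and the [0, 1] pixel range are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: nudge x so that the classification loss increases.

    Illustrative sketch; assumes `model` maps images in [0, 1] to logits
    and `epsilon` is an L-infinity perturbation budget.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each input element by +/- epsilon along the sign of the loss
    # gradient, then clamp back to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Calling `fgsm_attack(classifier, images, labels)` on an undefended image classifier typically flips many predictions even though the perturbation is nearly imperceptible, which is the vulnerability the robustness and detection work below tries to address.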
Papers
Improving Adversarial Training using Vulnerability-Aware Perturbation Budget
Olukorede Fakorede, Modeste Atsague, Jin Tian
Effect of Ambient-Intrinsic Dimension Gap on Adversarial Vulnerability
Rajdeep Haldar, Yue Xing, Qifan Song
Adversarial Infrared Geometry: Using Geometry to Perform Adversarial Attack against Infrared Pedestrian Detectors
Kalibinuer Tiliwalidi
Resilience of Entropy Model in Distributed Neural Networks
Milin Zhang, Mohammad Abdi, Shahriar Rifat, Francesco Restuccia
Robust Deep Reinforcement Learning Through Adversarial Attacks and Training: A Survey
Lucas Schott, Josephine Delas, Hatem Hajri, Elies Gherbi, Reda Yaich, Nora Boulahia-Cuppens, Frederic Cuppens, Sylvain Lamprier
Unraveling Adversarial Examples against Speaker Identification -- Techniques for Attack Detection and Victim Model Classification
Sonal Joshi, Thomas Thebaud, Jesús Villalba, Najim Dehak
MPAT: Building Robust Deep Neural Networks against Textual Adversarial Attacks
Fangyuan Zhang, Huichi Zhou, Shuangjiao Li, Hongtao Wang