Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, exploring various attack strategies across different model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
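The "subtly altering input data" idea above can be made concrete with the classic Fast Gradient Sign Method (FGSM): nudge each input feature a small step in the sign of the loss gradient, which is often enough to flip a model's prediction. The sketch below is illustrative only; the toy logistic model, its weights, and the input values are all hypothetical, chosen just to show a correct prediction flipping under a small perturbation.

```python
import math

def predict_prob(w, b, x):
    """Probability of class 1 under a toy logistic model (illustrative only)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(w, b, x, y, eps):
    """Fast Gradient Sign Method: step the input in the direction that
    increases the loss for the true label y in {0, 1}."""
    p = predict_prob(w, b, x)
    # For logistic cross-entropy, d(loss)/dx_i = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Hypothetical model and a correctly classified input
w, b = [2.0, -1.5, 0.5], 0.1
x, y = [0.4, -0.2, 0.3], 1

x_adv = fgsm_perturb(w, b, x, y, eps=0.5)
print(predict_prob(w, b, x) > 0.5)      # clean input: predicted class 1
print(predict_prob(w, b, x_adv) > 0.5)  # perturbed input: prediction flips
```

Real attacks on deep networks follow the same recipe but obtain the gradient by backpropagation through the full model, and defenses such as adversarial training fold perturbed examples like `x_adv` back into the training set.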
Papers
Embodied Adversarial Attack: A Dynamic Robust Physical Attack in Autonomous Driving
Yitong Sun, Yao Huang, Xingxing Wei
Adversarial Robustness on Image Classification with $k$-means
Rollin Omari, Junae Kim, Paul Montague
Continual Adversarial Defense
Qian Wang, Yaoyao Liu, Hefei Ling, Yingwei Li, Qihao Liu, Ping Li, Jiazhong Chen, Alan Yuille, Ning Yu
Coevolutionary Algorithm for Building Robust Decision Trees under Minimax Regret
Adam Żychowski, Andrew Perrault, Jacek Mańdziuk
Forbidden Facts: An Investigation of Competing Objectives in Llama-2
Tony T. Wang, Miles Wang, Kaivalya Hariharan, Nir Shavit
AVA: Inconspicuous Attribute Variation-based Adversarial Attack bypassing DeepFake Detection
Xiangtao Meng, Li Wang, Shanqing Guo, Lei Ju, Qingchuan Zhao
Scalable Ensemble-based Detection Method against Adversarial Attacks for speaker verification
Haibin Wu, Heng-Cheng Kuo, Yu Tsao, Hung-yi Lee
Universal Adversarial Framework to Improve Adversarial Robustness for Diabetic Retinopathy Detection
Samrat Mukherjee, Dibyanayan Bandyopadhyay, Baban Gain, Asif Ekbal
Efficient Representation of the Activation Space in Deep Neural Networks
Tanya Akumu, Celia Cintas, Girmaw Abebe Tadesse, Adebayo Oshingbesan, Skyler Speakman, Edward McFowland
Radio Signal Classification by Adversarially Robust Quantum Machine Learning
Yanqiu Wu, Eromanga Adermann, Chandra Thapa, Seyit Camtepe, Hajime Suzuki, Muhammad Usman
ReRoGCRL: Representation-based Robustness in Goal-Conditioned Reinforcement Learning
Xiangyu Yin, Sihao Wu, Jiaxu Liu, Meng Fang, Xingyu Zhao, Xiaowei Huang, Wenjie Ruan
Eroding Trust In Aerial Imagery: Comprehensive Analysis and Evaluation Of Adversarial Attacks In Geospatial Systems
Michael Lanier, Aayush Dhakal, Zhexiao Xiong, Arthur Li, Nathan Jacobs, Yevgeniy Vorobeychik
SSTA: Salient Spatially Transformed Attack
Renyang Liu, Wei Zhou, Sixin Wu, Jun Zhao, Kwok-Yan Lam
Attacking the Loop: Adversarial Attacks on Graph-based Loop Closure Detection
Jonathan J. Y. Kim, Martin Urschler, Patricia J. Riddle, Jorg S. Wicker