Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, exploring various attack strategies across different model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
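To make the idea concrete, the classic Fast Gradient Sign Method (FGSM) perturbs an input in the direction that most increases the model's loss. Below is a minimal NumPy sketch on a hypothetical two-class linear classifier; the weights, input, and epsilon are illustrative assumptions, not taken from any of the papers listed here.

```python
import numpy as np

# Hypothetical linear classifier: logits = W @ x (2 classes, 2 features).
W = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def predict(x):
    return int(np.argmax(W @ x))

def fgsm(x, true_label, eps):
    """FGSM: step the input in the sign of the gradient of the
    cross-entropy loss with respect to x, increasing that loss."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()                      # softmax probabilities
    onehot = np.eye(len(p))[true_label]
    grad_x = W.T @ (p - onehot)       # d(loss)/dx for a linear model
    return x + eps * np.sign(grad_x)

x = np.array([0.3, 0.1])              # originally classified as class 0
x_adv = fgsm(x, true_label=0, eps=0.3)
print(predict(x), predict(x_adv))     # a small perturbation flips the label
```

With eps = 0.3 the perturbed input crosses the decision boundary and is assigned the other class, even though it moved only 0.3 per feature; defenses such as adversarial training (the subject of several papers below) retrain the model on exactly these perturbed examples.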
Papers
Robust Subgraph Learning by Monitoring Early Training Representations
Sepideh Neshatfar, Salimeh Yasaei Sekeh
Towards White Box Deep Learning
Maciej Satkiewicz
What Sketch Explainability Really Means for Downstream Tasks
Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, Ayan Kumar Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song
Adversarial Fine-tuning of Compressed Neural Networks for Joint Improvement of Robustness and Efficiency
Hallgrimur Thorsteinsson, Valdemar J Henriksen, Tong Chen, Raghavendra Selvan
Adversarial Training with OCR Modality Perturbation for Scene-Text Visual Question Answering
Zhixuan Shen, Haonan Luo, Sijia Li, Tianrui Li
Soften to Defend: Towards Adversarial Robustness via Self-Guided Label Refinement
Daiwei Yu, Zhuorong Li, Lina Wei, Canghong Jin, Yun Zhang, Sixian Chan
The Impact of Quantization on the Robustness of Transformer-based Text Classifiers
Seyed Parsa Neshaei, Yasaman Boreshban, Gholamreza Ghassem-Sani, Seyed Abolghasem Mirroshandel
Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds
Tianrui Lou, Xiaojun Jia, Jindong Gu, Li Liu, Siyuan Liang, Bangyan He, Xiaochun Cao
Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume
Ping Guo, Cheng Gong, Xi Lin, Zhiyuan Yang, Qingfu Zhang
Defending Against Unforeseen Failure Modes with Latent Adversarial Training
Stephen Casper, Lennart Schulze, Oam Patel, Dylan Hadfield-Menell
Improving Adversarial Training using Vulnerability-Aware Perturbation Budget
Olukorede Fakorede, Modeste Atsague, Jin Tian
Effect of Ambient-Intrinsic Dimension Gap on Adversarial Vulnerability
Rajdeep Haldar, Yue Xing, Qifan Song
Adversarial Infrared Geometry: Using Geometry to Perform Adversarial Attack against Infrared Pedestrian Detectors
Kalibinuer Tiliwalidi