Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, exploring various attack strategies across different model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in diverse applications, from autonomous vehicles to medical diagnosis and cybersecurity.
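To make the idea of "subtly altering input data" concrete, the sketch below implements the Fast Gradient Sign Method (FGSM), one of the simplest gradient-based attacks: each input feature is nudged in the direction of the loss gradient's sign, bounded by a small epsilon. This is an illustrative example only; the toy model, random inputs, and epsilon value are assumptions for demonstration and are not drawn from any of the papers listed here.

import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarial copy of x within an L-infinity ball of
    radius epsilon around the original input (FGSM sketch)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels valid.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy classifier and a random "image" batch, purely for demonstration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(4, 1, 28, 28)       # inputs in [0, 1]
    y = torch.randint(0, 10, (4,))     # arbitrary labels
    x_adv = fgsm_attack(model, x, y)
    print("max per-pixel change:", (x_adv - x).abs().max().item())

The perturbation is imperceptible to a human viewer (each pixel changes by at most epsilon), yet it is chosen specifically to push the model's loss upward, which is why such inputs can flip a classifier's prediction.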
Papers
On Evaluating Adversarial Robustness of Volumetric Medical Segmentation Models
Hashmat Shadab Malik, Numan Saeed, Asif Hanif, Muzammal Naseer, Mohammad Yaqub, Salman Khan, Fahad Shahbaz Khan
Transformation-Dependent Adversarial Attacks
Yaoteng Tan, Zikui Cai, M. Salman Asif
Improving Noise Robustness through Abstractions and its Impact on Machine Learning
Alfredo Ibias, Karol Capala, Varun Ravi Varma, Anna Drozdz, Jose Sousa
Adversarial Patch for 3D Local Feature Extractor
Yu Wen Pao, Li Chang Lai, Hong-Yi Lin
Adversarial Evasion Attack Efficiency against Large Language Models
João Vitorino, Eva Maia, Isabel Praça
I Don't Know You, But I Can Catch You: Real-Time Defense against Diverse Adversarial Patches for Object Detectors
Zijin Lin, Yue Zhao, Kai Chen, Jinwen He
Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks
Peizhi Niu, Chao Pan, Siheng Chen, Olgica Milenkovic
Are Objective Explanatory Evaluation metrics Trustworthy? An Adversarial Analysis
Prithwijit Chowdhury, Mohit Prabhushankar, Ghassan AlRegib, Mohamed Deriche
Compositional Curvature Bounds for Deep Neural Networks
Taha Entesari, Sina Sharifi, Mahyar Fazlyab
ADBA: Approximation Decision Boundary Approach for Black-Box Adversarial Attacks
Feiyang Wang, Xingquan Zuo, Hai Huang, Gang Chen
A Survey of Fragile Model Watermarking
Zhenzhe Gao, Yu Cheng, Zhaoxia Yin
Probabilistic Perspectives on Error Minimization in Adversarial Reinforcement Learning
Roman Belaire, Arunesh Sinha, Pradeep Varakantham