Adversarial Attack
Adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications or other erroneous outputs. Current research focuses on developing more robust models and detection methods, and on exploring attack strategies across diverse model architectures (including vision transformers, recurrent neural networks, and graph neural networks) and data types (images, text, signals, and tabular data). Understanding and mitigating these attacks is crucial for ensuring the reliability and security of AI systems in applications ranging from autonomous vehicles to medical diagnosis and cybersecurity.
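To make the idea of "subtly altering input data" concrete, below is a minimal sketch of the classic Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015) in PyTorch: a single gradient step perturbs the input within a small L-infinity budget so that the model's loss increases. The `model`, `x`, `y`, and `epsilon` names are placeholders for illustration; the attacks studied in the papers below are generally stronger (iterative, transfer-based, or black-box).

```python
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM sketch (illustrative, not from any paper listed below).

    model:   a differentiable classifier (placeholder)
    x, y:    an input batch with pixels in [0, 1] and its true labels
    epsilon: the L-infinity perturbation budget
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each input dimension by epsilon in the direction that increases
    # the loss, then clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Even with a tiny `epsilon`, such perturbations are often imperceptible to humans yet flip the model's prediction, which is the core threat the works below attack, detect, or defend against.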
Papers
Unscrambling the Rectification of Adversarial Attacks Transferability across Computer Networks
Ehsan Nowroozi, Samaneh Ghelichkhani, Imran Haider, Ali Dehghantanha
PubDef: Defending Against Transfer Attacks From Public Models
Chawin Sitawarin, Jaewon Chang, David Huang, Wesson Altoyan, David Wagner
Uncertainty-weighted Loss Functions for Improved Adversarial Attacks on Semantic Segmentation
Kira Maag, Asja Fischer
Diffusion-Based Adversarial Purification for Speaker Verification
Yibo Bai, Xiao-Lei Zhang
CT-GAT: Cross-Task Generative Adversarial Attack based on Transferability
Minxuan Lv, Chengwei Dai, Kun Li, Wei Zhou, Songlin Hu
Imperceptible CMOS camera dazzle for adversarial attacks on deep neural networks
Zvi Stein, Adrian Stern
Adversarial Attacks on Fairness of Graph Neural Networks
Binchi Zhang, Yushun Dong, Chen Chen, Yada Zhu, Minnan Luo, Jundong Li
Data-Free Knowledge Distillation Using Adversarially Perturbed OpenGL Shader Images
Logan Frank, Jim Davis
Beyond Hard Samples: Robust and Effective Grammatical Error Correction with Cycle Self-Augmenting
Zecheng Tang, Kaifeng Qi, Juntao Li, Min Zhang
REVAMP: Automated Simulations of Adversarial Attacks on Arbitrary Objects in Realistic Scenes
Matthew Hull, Zijie J. Wang, Duen Horng Chau
IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks
Yue Cao, Tianlin Li, Xiaofeng Cao, Ivor Tsang, Yang Liu, Qing Guo
Adversarial Training for Physics-Informed Neural Networks
Yao Li, Shengzhu Shi, Zhichang Guo, Boying Wu
Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks
Erfan Shayegani, Md Abdullah Al Mamun, Yu Fu, Pedram Zaree, Yue Dong, Nael Abu-Ghazaleh
A Non-monotonic Smooth Activation Function
Koushik Biswas, Meghana Karri, Ulaş Bağcı
Black-box Targeted Adversarial Attack on Segment Anything (SAM)
Sheng Zheng, Chaoning Zhang, Xinhong Hao