Adversarial Perturbation
Adversarial perturbation research studies how maliciously crafted inputs can cause machine learning models to misclassify or otherwise fail, and how such vulnerabilities can be mitigated. Current work emphasizes improving the robustness of various model architectures, including deep convolutional neural networks, vision transformers, and graph neural networks, often through techniques such as adversarial training, vector quantization, and optimal transport methods. By identifying and addressing these vulnerabilities, the field helps ensure the reliability and security of AI systems across diverse applications, from image classification and face recognition to robotics and natural language processing.
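As a concrete illustration of the kind of perturbation these papers attack and defend against, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one standard way to craft an epsilon-bounded adversarial input. The toy model, the epsilon value, and the random input batch are illustrative assumptions and are not taken from any of the listed papers.

```python
# Minimal FGSM sketch: perturb inputs in the direction of the loss gradient's
# sign, bounded by epsilon. The model and data below are illustrative stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F


def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 8 / 255) -> torch.Tensor:
    """Return x plus an epsilon-bounded perturbation that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient, then clamp to the valid pixel range.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()


if __name__ == "__main__":
    # Toy setup: a small CNN classifier and a random "image" batch.
    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
    )
    x = torch.rand(4, 3, 32, 32)       # inputs in [0, 1]
    y = torch.randint(0, 10, (4,))     # ground-truth labels
    x_adv = fgsm_perturb(model, x, y)
    print((x_adv - x).abs().max())     # perturbation magnitude stays within epsilon
```

Adversarial training, mentioned above as a common defense, simply folds perturbed inputs such as these back into the training loss.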
Papers
On Adversarial Robustness and Out-of-Distribution Robustness of Large Language Models
April Yang, Jordan Tab, Parth Shah, Paul Kotchavong
A²RNet: Adversarial Attack Resilient Network for Robust Infrared and Visible Image Fusion
Jiawei Li, Hongwei Yu, Jiansheng Chen, Xinlong Ding, Jinlong Wang, Jinyuan Liu, Bochao Zou, Huimin Ma
Prompt2Perturb (P2P): Text-Guided Diffusion-Based Adversarial Attacks on Breast Ultrasound Images
Yasamin Medghalchi, Moein Heidari, Clayton Allard, Leonid Sigal, Ilker Hacihaliloglu
Real-time Identity Defenses against Malicious Personalization of Diffusion Models
Hanzhong Guo, Shen Nie, Chao Du, Tianyu Pang, Hao Sun, Chongxuan Li
TOAP: Towards Better Robustness in Universal Transferable Anti-Facial Retrieval
Yunna Lv, Long Tang, Dengpan Ye, Caiyun Xie, Jiacheng Deng, Yiheng He
On the Generation and Removal of Speaker Adversarial Perturbation for Voice-Privacy Protection
Chenyang Guo, Liping Chen, Zhuhai Li, Kong Aik Lee, Zhen-Hua Ling, Wu Guo
SVasP: Self-Versatility Adversarial Style Perturbation for Cross-Domain Few-Shot Learning
Wenqian Li, Pengfei Fang, Hui Xue