Adversarial Perturbation
Adversarial perturbation research studies how maliciously crafted inputs can cause machine learning models to misclassify or otherwise fail, and how such vulnerabilities can be mitigated. Current work emphasizes improving the robustness of diverse architectures, including deep convolutional neural networks, vision transformers, and graph neural networks, often through techniques such as adversarial training, vector quantization, and optimal transport. By identifying and addressing these vulnerabilities, the field underpins the reliability and security of AI systems in applications ranging from image classification and face recognition to robotics and natural language processing.
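To make the threat model concrete, the sketch below shows a single-step, FGSM-style perturbation of an image classifier's input under an L-infinity budget. It is a minimal illustration only, not the method of any paper listed here; `model`, `image`, `label`, and the `epsilon` budget are hypothetical placeholders.

```python
# Minimal FGSM-style sketch (single-step gradient-sign perturbation).
# `model`, `image`, `label`, and `epsilon` are assumed placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=8 / 255):
    """Return a perturbed copy of `image` within an L-infinity ball of radius epsilon."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixel range.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```

Adversarial training, mentioned above as a common defense, typically folds such perturbed examples back into the training loss so the model learns to classify them correctly.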
Papers
MMAD-Purify: A Precision-Optimized Framework for Efficient and Scalable Multi-Modal Attacks
Xinxin Liu, Zhongliang Guo, Siyuan Huang, Chun Pong Lau
Boosting Imperceptibility of Stable Diffusion-based Adversarial Examples Generation with Momentum
Nashrah Haque, Xiang Li, Zhehui Chen, Yanzhao Wu, Lei Yu, Arun Iyengar, Wenqi Wei
Perseus: Leveraging Common Data Patterns with Curriculum Learning for More Robust Graph Neural Networks
Kaiwen Xia, Huijun Wu, Duanyu Li, Min Xie, Ruibo Wang, Wenzhe Zhang
DAT: Improving Adversarial Robustness via Generative Amplitude Mix-up in Frequency Domain
Fengpeng Li, Kemou Li, Haiwei Wu, Jinyu Tian, Jiantao Zhou
Efficient and Effective Universal Adversarial Attack against Vision-Language Pre-training Models
Fan Yang, Yihao Huang, Kailong Wang, Ling Shi, Geguang Pu, Yang Liu, Haoyu Wang
Automatically Generating Visual Hallucination Test Cases for Multimodal Large Language Models
Zhongye Liu, Hongbin Liu, Yuepeng Hu, Zedian Shao, Neil Zhenqiang Gong
Adversarially Guided Stateful Defense Against Backdoor Attacks in Federated Deep Learning
Hassan Ali, Surya Nepal, Salil S. Kanhere, Sanjay Jha