Adversarial Perturbation
Adversarial perturbation research studies the vulnerability of machine learning models to maliciously crafted inputs designed to cause misclassification or other errors, and develops defenses against such attacks. Current work emphasizes improving the robustness of a range of model architectures, including deep convolutional neural networks, vision transformers, and graph neural networks, often through techniques such as adversarial training, vector quantization, and optimal transport methods. By identifying and addressing these vulnerabilities, the field helps ensure the reliability and security of AI systems across diverse applications, from image classification and face recognition to robotics and natural language processing.
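As a concrete illustration of how such an input can be crafted, the sketch below applies the Fast Gradient Sign Method (FGSM), one of the simplest perturbation attacks, to a placeholder PyTorch model. The model, input, label, and epsilon budget are all hypothetical choices made for illustration and are not drawn from the papers listed below.

```python
# Minimal FGSM sketch: perturb an input in the direction that increases the
# loss, within an L-infinity budget, to try to flip the model's prediction.
# All names here (model, image, label, epsilon) are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None)   # hypothetical victim model
model.eval()

image = torch.rand(1, 3, 224, 224)      # placeholder input image in [0, 1]
label = torch.tensor([3])               # placeholder ground-truth class
epsilon = 8 / 255                       # L-infinity perturbation budget

# Compute the gradient of the loss with respect to the input pixels.
image.requires_grad_(True)
loss = F.cross_entropy(model(image), label)
loss.backward()

# Step each pixel by epsilon in the sign of its gradient, then clamp back to
# the valid image range so the perturbed input remains a feasible image.
adv_image = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adv_image).argmax(dim=1).item())
```

Adversarial training, mentioned above as a common defense, typically reuses exactly this kind of perturbed example: the adversarial input is generated on the fly during training and the model is optimized on it (or on a mix of clean and perturbed inputs) rather than on the clean input alone.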
Papers
Towards Evaluating the Robustness of Visual State Space Models
Hashmat Shadab Malik, Fahad Shamshad, Muzammal Naseer, Karthik Nandakumar, Fahad Shahbaz Khan, Salman Khan
Improving Adversarial Robustness via Feature Pattern Consistency Constraint
Jiacong Hu, Jingwen Ye, Zunlei Feng, Jiazhen Yang, Shunyu Liu, Xiaotian Yu, Lingxiang Jia, Mingli Song