Adversarial Perturbation
Adversarial perturbation research studies how machine learning models can be fooled by maliciously crafted inputs that cause misclassification or other errors, and how to defend against such inputs. Current work emphasizes improving the robustness of a range of model architectures, including deep convolutional neural networks, vision transformers, and graph neural networks, often through techniques such as adversarial training, vector quantization, and optimal transport methods. By identifying and closing these vulnerabilities, the field is crucial for ensuring the reliability and security of AI systems across diverse applications, from image classification and face recognition to robotics and natural language processing.
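To make the notion of a crafted perturbation concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest attacks of this kind; the model, the cross-entropy loss, and the epsilon value are illustrative assumptions rather than details drawn from any paper listed here.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    """One-step FGSM sketch: nudge each input feature by +/- epsilon in
    the direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step along the sign of the input gradient, then clamp back to a
    # valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Adversarial training, mentioned above, folds such perturbed examples back into the training loop so that the model learns to classify them correctly.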
423 papers
March 16, 2022
On the Convergence of Certified Robust Training with Interval Bound Propagation
Yihan Wang, Zhouxing Shi, Quanquan Gu, Cho-Jui Hsieh

What Do Adversarially trained Neural Networks Focus: A Fourier Domain-based Study
Binxiao Huang, Chaofan Tao, Rui Lin, Ngai Wong

Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?
Yonggan Fu, Shunyao Zhang, Shang Wu, Cheng Wan, Yingyan Celine Lin
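The first paper above studies certified training with Interval Bound Propagation (IBP), which certifies robustness by propagating elementwise lower and upper input bounds through the network. Below is a minimal NumPy sketch of the core IBP step for a single affine layer; the function name and the choice of NumPy are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def ibp_affine(W, b, lower, upper):
    """Propagate elementwise bounds [lower, upper] through y = W @ x + b.
    Uses the standard center/radius form: mu = (l + u) / 2, r = (u - l) / 2,
    so the output lies in [W @ mu + b - |W| @ r, W @ mu + b + |W| @ r]."""
    mu = (lower + upper) / 2.0
    r = (upper - lower) / 2.0
    center = W @ mu + b
    radius = np.abs(W) @ r
    return center - radius, center + radius
```

For an L-infinity ball of radius epsilon around an input x, one would start from lower = x - epsilon and upper = x + epsilon, apply the monotone ReLU elementwise to both bounds between affine layers, and certify robustness if the worst-case logit margin remains positive.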