Adversarial Perturbation
Adversarial perturbation research studies how maliciously crafted inputs can cause machine learning models to misclassify or otherwise fail, and how to defend against such attacks. Current work emphasizes improving the robustness of diverse model architectures, including deep convolutional neural networks, vision transformers, and graph neural networks, often through techniques such as adversarial training, vector quantization, and optimal transport methods. By identifying vulnerabilities and hardening models against them, this field is crucial to the reliability and security of AI systems across applications ranging from image classification and face recognition to robotics and natural language processing.
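To make the core idea concrete, the sketch below shows one of the simplest ways to craft an adversarial perturbation, the Fast Gradient Sign Method (FGSM), in PyTorch. The function name, the epsilon budget, and the assumption of inputs normalized to [0, 1] are illustrative choices for this example and are not taken from any of the papers listed below.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    """Craft adversarial examples with FGSM (illustrative sketch).

    model: a classifier mapping inputs to logits
    x: input batch assumed normalized to [0, 1]
    y: ground-truth labels
    epsilon: L-infinity perturbation budget (hypothetical default)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Take one step in the sign of the input gradient to maximally
    # increase the loss, then clamp back to the valid input range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Adversarial training, one of the defenses mentioned above, amounts to generating such perturbed batches during training and minimizing the loss on them instead of (or alongside) the clean inputs.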
Papers
Efficient local linearity regularization to overcome catastrophic overfitting
Elias Abad Rocamora, Fanghui Liu, Grigorios G. Chrysos, Pablo M. Olmos, Volkan Cevher
How Robust Are Energy-Based Models Trained With Equilibrium Propagation?
Siddharth Mansingh, Michal Kucer, Garrett Kenyon, Juston Moore, Michael Teti
A GAN-based data poisoning framework against anomaly detection in vertical federated learning
Xiaolin Chen, Daoguang Zan, Wei Li, Bei Guan, Yongji Wang
Rethinking Impersonation and Dodging Attacks on Face Recognition Systems
Fengfan Zhou, Qianyu Zhou, Bangjie Yin, Hui Zheng, Xuequan Lu, Lizhuang Ma, Hefei Ling