Robust Adversarial Training
Robust adversarial training aims to improve the resilience of machine learning models, particularly deep neural networks, against adversarial attacks: maliciously perturbed inputs designed to fool the model. Current research focuses on mitigating vulnerabilities that arise from data corruption, label noise, and model updates that degrade robustness. Common techniques include hybrid adversarial training, constrained fine-tuning, and hierarchical regularization, applied across a range of architectures including graph neural networks. These advances are crucial for the reliability and security of machine learning systems in applications from computer vision to graph-based data analysis, where model robustness is paramount.
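Adversarial training is typically framed as a min-max problem: an inner maximization crafts a worst-case perturbation delta with ||delta||_inf <= eps for each input, and an outer minimization updates the model weights on the perturbed examples. The PyTorch sketch below illustrates one common instantiation: projected gradient descent (PGD) for the inner step and a weighted clean/adversarial loss for a "hybrid" outer step. The function names, the eps/alpha/step values, and the 0.5 mixing weight are illustrative assumptions, not taken from any of the papers listed here.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Inner maximization: search within the L-infinity ball of
        # radius eps for a perturbation that maximizes the loss.
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
            delta = delta.detach().requires_grad_(True)
        return (x + delta).clamp(0, 1).detach()  # keep valid pixel range

    def hybrid_training_step(model, optimizer, x, y, adv_weight=0.5):
        # Outer minimization: update weights on a mix of clean and
        # adversarial examples ("hybrid" adversarial training; the
        # 50/50 weighting is an illustrative choice).
        model.eval()                   # fixed batch-norm stats while attacking
        x_adv = pgd_attack(model, x, y)
        model.train()
        optimizer.zero_grad()
        loss = (1 - adv_weight) * F.cross_entropy(model(x), y) \
             + adv_weight * F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

Setting adv_weight=1.0 recovers standard PGD adversarial training, while intermediate values trade clean accuracy against robustness, which is the usual motivation for hybrid schemes.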