Robust Adversarial Training
Robust adversarial training aims to improve the resilience of machine learning models, particularly deep neural networks, against adversarial attacks: maliciously crafted inputs designed to fool a model. Current research focuses on mitigating vulnerabilities arising from data corruption, label noise, and model updates that degrade robustness, using techniques such as hybrid adversarial training, constrained fine-tuning, and hierarchical regularization across architectures including graph neural networks. These advances are crucial for the reliability and security of machine learning systems in applications ranging from computer vision to graph-based data analysis, where robustness is paramount.
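The core idea of adversarial training is a min-max loop: an inner step crafts worst-case perturbations of the inputs, and an outer step updates the model on those perturbed inputs. Below is a minimal sketch of this loop, using a one-step FGSM attack on a toy NumPy logistic-regression model; all data, hyperparameters, and names are illustrative assumptions, not taken from any of the papers summarized here.

```python
import numpy as np

# Illustrative sketch of FGSM-style adversarial training on a toy
# logistic regression model. Data and hyperparameters are assumptions.
rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs in 2D.
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)), rng.normal(1.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w = np.zeros(2)
b = 0.0
lr, eps = 0.1, 0.1  # learning rate and FGSM perturbation budget

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # Inner maximization: one FGSM step per example.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]   # d(cross-entropy)/d(input)
    X_adv = X + eps * np.sign(grad_x)        # worst-case L_inf perturbation

    # Outer minimization: gradient descent on the adversarial batch.
    err = sigmoid(X_adv @ w + b) - y
    w -= lr * X_adv.T @ err / len(y)
    b -= lr * err.mean()

# Robust accuracy: accuracy on freshly FGSM-perturbed inputs.
p = sigmoid(X @ w + b)
grad_x = (p - y)[:, None] * w[None, :]
X_test_adv = X + eps * np.sign(grad_x)
robust_acc = ((sigmoid(X_test_adv @ w + b) > 0.5) == y).mean()
print(f"robust accuracy under FGSM: {robust_acc:.2f}")
```

Stronger variants (e.g. PGD) iterate the inner attack step several times with projection back into the perturbation ball; the surveyed hybrid and regularized approaches modify this basic loop rather than replace it.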
7 papers