Robust Generalization Gap
The robust generalization gap in deep learning is the discrepancy between a model's performance on adversarially perturbed training data and its performance on unseen, similarly perturbed test data; under adversarial training this gap is often much larger than the standard (clean) generalization gap. Current research studies the gap through analyses of model stability, architectural choices (for example, comparing CNNs and Vision Transformers), and the effect of sparsity techniques applied during adversarial training. Closing the gap is crucial for deploying robust models in real-world applications where adversarial attacks are a concern, since it directly limits the reliability and trustworthiness of these systems.
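Concretely, the gap can be estimated by running the same attack on both data splits and subtracting: robust accuracy on the training set minus robust accuracy on the test set. Below is a minimal PyTorch sketch of this measurement using an L-infinity PGD attack; it is an illustration, not drawn from any particular paper, and the hyperparameters (`eps`, `alpha`, `steps`) as well as the names `model`, `train_loader`, and `test_loader` are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: iterated signed-gradient ascent on the loss,
    projected back into the eps-ball around the clean input x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()
        # Project into the eps-ball and the valid pixel range [0, 1].
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()

def robust_accuracy(model, loader, **attack_kwargs):
    """Accuracy on adversarial examples crafted against each batch."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, **attack_kwargs)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total

# Usage (hypothetical names): model is an existing classifier; train_loader
# and test_loader are DataLoaders over inputs scaled to [0, 1].
# gap = robust_accuracy(model, train_loader) - robust_accuracy(model, test_loader)
```

Measured this way, a large positive gap signals robust overfitting: the model fits the adversarially perturbed training examples far better than it withstands the same attack on held-out data.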