Robust Overfitting
Robust overfitting in adversarial training describes the phenomenon where deep neural networks, while achieving near-perfect accuracy on adversarially perturbed training data, fail to generalize well to unseen, similarly perturbed test data. Current research focuses on mitigating this issue through techniques like data augmentation, label refinement, and novel regularization methods applied to various architectures including Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). Understanding and overcoming robust overfitting is crucial for developing truly robust and reliable deep learning models, with significant implications for the security and trustworthiness of AI systems in real-world applications.
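To make the setting concrete, below is a minimal NumPy sketch of the adversarial-training loop in which robust overfitting is observed: inner-loop PGD generates a worst-case perturbation within an L-infinity ball, and the model is updated on the perturbed input. All names (`pgd_attack`, `adv_train_step`, `epsilon`, `alpha`) and the linear-classifier setup are illustrative assumptions, not taken from any particular paper listed here.

```python
import numpy as np

def grad_loss(w, x, y):
    # Gradient of the logistic loss w.r.t. the input x for a linear model w.x
    # (the same expression gives the gradient w.r.t. w up to swapping roles).
    z = w @ x
    p = 1.0 / (1.0 + np.exp(-z))
    return (p - y) * w

def pgd_attack(w, x, y, epsilon=0.1, alpha=0.02, steps=10):
    # Projected gradient ascent on the loss, kept inside the
    # L-infinity ball of radius epsilon around the clean input x.
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_loss(w, x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)                # ascend the loss
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # project back
    return x_adv

def adv_train_step(w, x, y, lr=0.1, **attack_kwargs):
    # One adversarial-training update: descend the loss on the
    # perturbed example rather than the clean one.
    x_adv = pgd_attack(w, x, y, **attack_kwargs)
    return w - lr * grad_loss(w, x_adv, y)
```

Robust overfitting shows up when the training loss on such `x_adv` examples keeps shrinking while the same attack applied to held-out test points succeeds far more often, so monitoring robust *test* accuracy during training is essential.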