Robust Training
Robust training aims to develop machine learning models that remain reliable under various forms of noise and uncertainty, including data imperfections, adversarial attacks, and distribution shifts. Current research focuses on improving robustness across diverse applications and data types, employing techniques such as adversarial training, data augmentation strategies (e.g., Mixup and diffusion-based variants), and novel loss functions that incorporate noise governance or margin maximization. These advances are crucial for deploying reliable and trustworthy AI systems in real-world settings, particularly in safety-critical applications where performance under uncertainty is paramount.
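As one concrete illustration of the augmentation strategies mentioned above, a minimal sketch of Mixup follows: each training example is blended with a randomly paired one, with a mixing coefficient drawn from a Beta distribution. The function name and parameter defaults here are illustrative, not from any specific paper in this collection.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Mixup augmentation: convexly combine random pairs of examples.

    x: (batch, ...) float array of inputs
    y: (batch, num_classes) one-hot (or soft) label array
    alpha: Beta-distribution concentration; small alpha keeps most
           mixes close to one of the two original examples.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)      # single mixing coefficient for the batch
    perm = rng.permutation(len(x))    # random pairing within the batch
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, y_mix, lam
```

Training on `(x_mix, y_mix)` instead of the raw batch encourages linear behavior between examples, which tends to improve robustness to label noise and mild distribution shift.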
Papers
[Paper list: 19 entries, dated September 29, 2023 through October 31, 2024; titles and links were not preserved in this extract.]