Robust Training
Robust training aims to develop machine learning models that remain reliable under various forms of noise and uncertainty, including data imperfections, adversarial attacks, and distribution shifts. Current research focuses on improving robustness across diverse applications and data types, employing techniques such as adversarial training, data augmentation strategies (e.g., Mixup and diffusion-based variants), and novel loss functions that incorporate noise governance or margin maximization. These advances are crucial for deploying reliable, trustworthy AI systems in real-world settings, particularly in safety-critical applications where model performance under uncertainty is paramount.
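Of the augmentation strategies mentioned, Mixup is the simplest to illustrate: it trains on convex combinations of input pairs and their labels rather than on raw examples. A minimal NumPy sketch, assuming one-hot labels and the standard Beta(alpha, alpha) mixing distribution (the function name and default alpha are illustrative, not from a specific library):

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Mixup augmentation: convex combinations of sample pairs and labels.

    x: (batch, ...) array of inputs; y: (batch, num_classes) one-hot labels.
    alpha controls the Beta(alpha, alpha) mixing distribution (illustrative default).
    """
    rng = np.random.default_rng(rng)
    lam = rng.beta(alpha, alpha)           # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))         # random pairing of samples
    x_mix = lam * x + (1 - lam) * x[perm]  # blend inputs
    y_mix = lam * y + (1 - lam) * y[perm]  # blend labels identically
    return x_mix, y_mix

# Example: mix a batch of 4 two-feature samples with one-hot labels.
x = np.arange(8.0).reshape(4, 2)
y = np.eye(4)
x_mix, y_mix = mixup_batch(x, y, alpha=0.2, rng=0)
```

Because labels are blended with the same coefficient as inputs, each mixed label remains a valid probability distribution, which is what lets Mixup act as a regularizer that encourages linear behavior between training examples.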