Robust Training
Robust training aims to develop machine learning models that remain reliable under various forms of noise and uncertainty, including data imperfections, adversarial attacks, and distribution shifts. Current research focuses on improving model robustness across diverse applications and data types, employing techniques such as adversarial training, data augmentation strategies (e.g., Mixup and diffusion-based variants), and novel loss functions that incorporate noise governance or margin maximization. These advances are crucial for deploying reliable and trustworthy AI systems in real-world settings, particularly in safety-critical applications where model performance under uncertainty is paramount.
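As a concrete illustration of two of the techniques named above, the following PyTorch sketch combines Mixup augmentation with single-step (FGSM-style) adversarial training in one training step. It is a minimal sketch, not the method of any particular paper in this area; the function names, the loss weighting, and the hyperparameters (alpha, eps) are illustrative assumptions.

```python
# Minimal robust-training sketch: Mixup augmentation + FGSM adversarial examples.
# Model, optimizer, and hyperparameters are placeholders, not taken from a specific paper.
import torch
import torch.nn.functional as F


def mixup(x, y, alpha=0.2):
    """Convexly combine random pairs of inputs and return both label sets (Mixup)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    return x_mix, y, y[perm], lam


def fgsm_example(model, x, y, eps=8 / 255):
    """Build a single-step adversarial example (FGSM) for adversarial training."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()


def robust_training_step(model, optimizer, x, y):
    model.train()
    optimizer.zero_grad()

    # Mixup branch: the loss mirrors the convex combination applied to the inputs.
    x_mix, y_a, y_b, lam = mixup(x, y)
    logits = model(x_mix)
    loss_mix = lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)

    # Adversarial branch: also train on perturbed versions of the clean batch.
    x_adv = fgsm_example(model, x, y)
    loss_adv = F.cross_entropy(model(x_adv), y)

    # Equal weighting of the two branches is an arbitrary choice for illustration.
    loss = 0.5 * (loss_mix + loss_adv)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, stronger multi-step attacks (e.g., PGD) are often substituted for the FGSM step, and the branch weighting is tuned per task; the structure of the loop stays the same.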