Robust Training
Robust training aims to develop machine learning models that remain reliable under various forms of noise and uncertainty, including data imperfections, adversarial attacks, and distribution shifts. Current research focuses on improving robustness across diverse applications and data types, employing techniques such as adversarial training, data augmentation strategies (e.g., Mixup and diffusion-based variants), and novel loss functions that incorporate noise governance or margin maximization. These advances are crucial for deploying reliable and trustworthy AI systems in real-world settings, particularly in safety-critical applications where performance under uncertainty is paramount.
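To make two of the named techniques concrete, below is a minimal PyTorch sketch combining Mixup augmentation with single-step (FGSM) adversarial training. It is illustrative only and not the method of either paper listed here; the function names, the 0.5 loss weighting, and all hyperparameters (alpha, eps) are assumptions chosen for the example.

```python
# Sketch: Mixup + FGSM adversarial training for a generic image classifier.
# Assumes inputs are scaled to [0, 1]; model, optimizer, and hyperparameters
# are placeholders, not values from the papers below.
import torch
import torch.nn.functional as F

def mixup(x, y, alpha=0.2):
    """Convexly combine random pairs of examples; return both labels and the weight."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], y, y[perm], lam

def fgsm_example(model, x, y, eps=8 / 255):
    """Single-step adversarial example inside an L-infinity ball of radius eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def train_step(model, optimizer, x, y):
    model.train()
    # Mixup branch: interpolate inputs and take the matching convex
    # combination of the two cross-entropy losses.
    x_mix, y_a, y_b, lam = mixup(x, y)
    logits = model(x_mix)
    loss_mix = lam * F.cross_entropy(logits, y_a) + (1 - lam) * F.cross_entropy(logits, y_b)
    # Adversarial branch: train on worst-case perturbed inputs.
    x_adv = fgsm_example(model, x, y)
    loss_adv = F.cross_entropy(model(x_adv), y)
    # Equal weighting of the two objectives is an arbitrary choice here.
    loss = 0.5 * (loss_mix + loss_adv)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the two objectives are often used separately, or FGSM is replaced by a multi-step attack such as PGD; the single-step version is shown here only to keep the sketch short.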
Papers
Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness
Yuancheng Xu, Yanchao Sun, Micah Goldblum, Tom Goldstein, Furong Huang
Bitrate-Constrained DRO: Beyond Worst Case Robustness To Unknown Group Shifts
Amrith Setlur, Don Dennis, Benjamin Eysenbach, Aditi Raghunathan, Chelsea Finn, Virginia Smith, Sergey Levine