Robust Training

Robust training aims to develop machine learning models that remain reliable under various forms of noise and uncertainty, including data imperfections, adversarial attacks, and distribution shifts. Current research focuses on improving model robustness across diverse applications and data types, employing techniques such as adversarial training, data augmentation strategies (e.g., Mixup and diffusion-based variants), and novel loss functions that incorporate noise governance or margin maximization. These advances are crucial for deploying reliable and trustworthy AI systems in real-world settings, particularly in safety-critical applications where performance under uncertainty is paramount.
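To make one of the named techniques concrete, below is a minimal sketch of Mixup-style augmentation in pure Python: each training example is replaced by a convex combination of two examples and their label distributions, with the mixing weight drawn from a Beta distribution. The function name `mixup_pair` and the toy inputs are illustrative, not from any specific library.

```python
import random

def mixup_pair(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixup augmentation: return a convex combination of two
    feature vectors and of their (one-hot) label vectors."""
    rng = rng or random.Random()
    lam = rng.betavariate(alpha, alpha)  # mixing weight in (0, 1)
    x_mix = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y_mix = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x_mix, y_mix

# Example: mix two 3-feature examples with one-hot class labels.
rng = random.Random(0)
x_mix, y_mix = mixup_pair([1.0, 0.0, 2.0], [1.0, 0.0],
                          [0.0, 1.0, 4.0], [0.0, 1.0],
                          alpha=0.2, rng=rng)
```

Training on such mixed pairs smooths the decision boundary between classes, which is one reason Mixup improves robustness to label noise and adversarial perturbations.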

Papers