Robust Risk
Robust risk research develops machine learning models and statistical methods that remain reliable under distributional uncertainty and adversarial attacks, with the goal of minimizing worst-case performance degradation. Current efforts concentrate on computationally efficient algorithms for distributionally robust optimization, often using the Wasserstein distance or Sinkhorn divergence to quantify uncertainty, and on understanding how regularization or local queries can enhance robustness. This work is crucial for deploying reliable machine learning systems in safety-critical applications and for improving the generalizability and trustworthiness of models across diverse real-world scenarios.
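As a concrete illustration of one idea mentioned above, the sketch below evaluates a Wasserstein distributionally robust risk bound for a linear model with absolute loss. It relies on a standard duality fact: for a loss that is L-Lipschitz in the features, the worst-case expected loss over a type-1 Wasserstein ball of radius rho around the empirical distribution equals the empirical risk plus rho times L (here L is the Euclidean norm of the weight vector). All function names and the toy data are illustrative, not from any specific paper.

```python
import numpy as np

def empirical_risk(w, X, y):
    """Average absolute loss of the linear model x -> w @ x."""
    return np.mean(np.abs(X @ w - y))

def wasserstein_robust_risk(w, X, y, rho):
    """Worst-case risk over a type-1 Wasserstein ball of radius rho
    (Euclidean ground metric on features).

    For the absolute loss, which is ||w||_2-Lipschitz in x, duality
    gives: sup over the ball = empirical risk + rho * ||w||_2.
    """
    return empirical_risk(w, X, y) + rho * np.linalg.norm(w)

# Toy regression data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)

nominal = empirical_risk(w_true, X, y)
robust = wasserstein_robust_risk(w_true, X, y, rho=0.1)
# The robust risk upper-bounds the nominal risk, and the gap
# rho * ||w||_2 acts like a norm penalty on the model weights.
```

Note how the robust objective reduces to empirical risk plus a norm regularizer, which is one reason regularization and distributional robustness are closely linked in this literature.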