Distributionally Robust Optimization

Distributionally robust optimization (DRO) is a framework for training machine learning models that remain reliable under uncertainty about the data distribution: rather than minimizing average risk on the training distribution, DRO minimizes the worst-case risk over a set of plausible distributions (an ambiguity set). Current research focuses on efficient algorithms for various DRO formulations, including those based on the Wasserstein distance, f-divergences, and other discrepancy measures, often incorporating techniques such as variance reduction and primal-dual methods to improve scalability and convergence. DRO matters because it can improve the generalization and fairness of models, particularly in applications with limited data, noisy labels, or significant distribution shift, supporting more reliable and equitable decision-making across diverse fields.
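One concrete instance of the worst-case idea is group DRO, where the ambiguity set consists of all mixtures over a fixed collection of data groups, so the objective becomes minimizing the maximum per-group loss. A common heuristic alternates a gradient step on the model with an exponentiated-gradient (multiplicative-weights) ascent step on the adversarial group weights. The sketch below is a minimal NumPy illustration on synthetic data, not the method of any particular paper; the toy groups, step sizes, and helper names (e.g. `group_losses`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic groups whose labels depend on different features,
# so a single linear model cannot fit both perfectly. The majority
# group is 10x larger, which is where ERM would concentrate.
X1 = rng.normal(size=(200, 2)); y1 = (X1[:, 0] > 0).astype(float)
X2 = rng.normal(size=(20, 2));  y2 = (X2[:, 1] > 0).astype(float)
groups = [(X1, y1), (X2, y2)]

def group_losses(w):
    """Logistic loss of the linear model w on each group."""
    losses = []
    for X, y in groups:
        p = 1.0 / (1.0 + np.exp(-X @ w))
        losses.append(-np.mean(y * np.log(p + 1e-9)
                               + (1 - y) * np.log(1 - p + 1e-9)))
    return np.array(losses)

w = np.zeros(2)                              # model parameters
q = np.full(len(groups), 1.0 / len(groups))  # adversarial group weights
lr, eta = 0.5, 0.1                           # model / weight step sizes

for _ in range(500):
    # Exponentiated-gradient ascent: upweight the worst-off groups.
    losses = group_losses(w)
    q = q * np.exp(eta * losses)
    q /= q.sum()
    # Gradient descent on the q-weighted (robust) loss.
    grad = np.zeros(2)
    for qg, (X, y) in zip(q, groups):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad += qg * (X.T @ (p - y)) / len(y)
    w -= lr * grad

print("worst-group loss:", np.max(group_losses(w)))
```

Because the minority group's loss feeds back into its weight `q`, the model is pushed to reduce the maximum group loss rather than the size-weighted average, which is the behavior the worst-case objective above is meant to induce.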

Papers