Wasserstein Distributionally Robust
Wasserstein distributionally robust (WDR) methods aim to build machine learning models that remain reliable under distribution shift by optimizing worst-case performance over all distributions within a Wasserstein ball centered at the empirical (observed) distribution. Current research focuses on improving the robustness of WDR models to outliers and imbalanced data, developing efficient algorithms for large-scale problems (e.g., via coresets and duality results), and extending WDR to a range of model architectures, including support vector machines and generative adversarial networks. This work is significant because it provides theoretically grounded and practically effective approaches for building more reliable and generalizable machine learning models in the face of real-world data imperfections.
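To make the worst-case formulation concrete, the following is a minimal sketch for logistic regression. It relies on a well-known duality result for Lipschitz losses: when only the features may be perturbed (at Euclidean transport cost), the supremum of the expected loss over a Wasserstein ball of radius eps around the empirical distribution reduces to the empirical loss plus a norm penalty on the parameters. All names here (`logistic_loss`, `wdro_logistic_objective`, the synthetic data) are illustrative, not from any particular library.

```python
import numpy as np

def logistic_loss(theta, X, y):
    # Empirical logistic loss with labels y in {-1, +1}:
    # mean of log(1 + exp(-y_i * <theta, x_i>)).
    margins = y * (X @ theta)
    return np.mean(np.log1p(np.exp(-margins)))

def wdro_logistic_objective(theta, X, y, eps):
    # Sketch of the dual reformulation: the worst-case expected loss
    # over a type-1 Wasserstein ball of radius eps (features perturbed
    # under the Euclidean cost, labels held fixed) equals the empirical
    # loss plus eps * ||theta||_2, because the logistic loss is
    # 1-Lipschitz in the margin. This is an assumed simplification of
    # the general WDR duality, valid for this linear-model setting.
    return logistic_loss(theta, X, y) + eps * np.linalg.norm(theta)

# Synthetic data for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
theta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = np.sign(X @ theta + 0.1 * rng.normal(size=100))

nominal = logistic_loss(theta, X, y)
robust = wdro_logistic_objective(theta, X, y, eps=0.1)
# The robust objective upper-bounds the nominal loss by eps * ||theta||_2.
```

The penalty term makes the practical appeal visible: in this setting the WDR problem can be solved with ordinary regularized-ERM machinery, which is one reason duality results feature prominently in the efficient-algorithms work mentioned above.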