Wasserstein Uncertainty
Wasserstein uncertainty focuses on building machine learning models that remain reliable under uncertainty about the data-generating distribution, typically modeled as a Wasserstein ball, i.e., the set of distributions within a given Wasserstein distance of the empirical distribution. Current research emphasizes efficient algorithms for the resulting distributionally robust optimization problems, particularly techniques such as Wasserstein proximal gradient flows, and incorporates Wasserstein distances into generative models for uncertainty quantification. This approach enhances model robustness against adversarial attacks, data scarcity, and domain shifts, with applications spanning fault diagnosis, reinforcement learning, and adversarial training. The resulting methods improve the generalization and reliability of machine learning models in real-world scenarios.
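As a minimal illustration of the two core ideas above (not any specific method from the surveyed papers), the sketch below first computes the 1-Wasserstein distance between an empirical sample and a shifted copy, which quantifies a domain shift, and then fits a linear model with absolute loss plus a Wasserstein-style penalty: for a 1-Lipschitz loss, the worst case over a radius-eps Wasserstein ball dualizes, up to details of the transport cost, to the empirical risk plus eps times a dual norm of the weights. The data, radius eps=0.7, and subgradient-descent settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-Wasserstein distance between two 1-D empirical samples of equal size:
# sort both and average the absolute differences. For a pure shift of 0.7,
# the distance is exactly the shift.
clean = rng.normal(0.0, 1.0, size=500)
shifted = clean + 0.7  # simulated domain shift
w1 = np.mean(np.abs(np.sort(clean) - np.sort(shifted)))

def robust_fit(X, y, eps, steps=2000, lr=0.05):
    """Subgradient descent on mean |Xw - y| + eps * ||w||_2,
    the norm-regularized dual surrogate of Wasserstein DRO
    for a 1-Lipschitz loss (illustrative assumption)."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(steps):
        resid = X @ w - y
        grad = X.T @ np.sign(resid) / n                  # subgradient of the data term
        grad += eps * w / (np.linalg.norm(w) + 1e-12)    # subgradient of eps * ||w||_2
        w -= lr * grad
    return w

X = rng.normal(size=(200, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

w_plain = robust_fit(X, y, eps=0.0)   # ordinary least-absolute-deviation fit
w_robust = robust_fit(X, y, eps=0.7)  # robust fit over a radius-0.7 ball
```

The robust weights are shrunk toward zero relative to the plain fit, trading in-sample accuracy for stability under distributions near the empirical one, which is the mechanism behind the robustness to shifts and adversarial perturbations described above.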