Distributionally Robust

Distributionally robust methods aim to build machine learning models and control systems that perform well even when the deployment data or environment differs from the training conditions. Current research develops algorithms and theoretical frameworks across applications such as reinforcement learning, inverse reinforcement learning, and classification, typically defining uncertainty sets with discrepancy measures like the Wasserstein distance, KL divergence, or total variation distance, and then optimizing against the worst-case distribution in that set. This work matters because it directly addresses robustness to distribution shift, a critical requirement for reliable systems in real-world settings such as autonomous driving and robotics. Designing efficient algorithms with provable guarantees remains a major focus of ongoing research.
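To make the worst-case idea concrete, here is a minimal sketch (an illustration, not drawn from any particular paper) of distributionally robust optimization with a KL-divergence uncertainty set. It uses the standard dual formulation: the worst-case expected loss over all distributions within KL radius `rho` of the empirical distribution equals a one-dimensional minimization over a temperature parameter. The function name `kl_dro_loss` and the radius `rho` are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl_dro_loss(losses, rho):
    """Worst-case expected loss over {Q : KL(Q || P_hat) <= rho}.

    Uses the dual form:
        min_{lam > 0}  lam * log E_{P_hat}[exp(loss / lam)] + lam * rho
    where P_hat is the empirical distribution over the given losses.
    """
    losses = np.asarray(losses, dtype=float)

    def dual(lam):
        # Log-sum-exp trick for numerical stability at small lam.
        z = losses / lam
        m = z.max()
        return lam * (m + np.log(np.mean(np.exp(z - m)))) + lam * rho

    res = minimize_scalar(dual, bounds=(1e-6, 1e6), method="bounded")
    return res.fun

# As rho -> 0 the robust loss approaches the empirical mean;
# as rho grows it moves toward the maximum per-sample loss.
sample_losses = np.array([0.1, 0.2, 0.9, 0.15])
print(kl_dro_loss(sample_losses, rho=0.01))
print(kl_dro_loss(sample_losses, rho=0.5))
```

The dual reduces an infinite-dimensional problem over distributions to a scalar search, which is why KL uncertainty sets are popular in practice; Wasserstein sets admit analogous but different reformulations.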

Papers