Risk Minimization

Risk minimization in machine learning aims to develop models that minimize the expected loss, or risk, under varied conditions such as data heterogeneity and distribution shifts. Current research adapts risk minimization techniques to diverse settings, including federated learning, multi-task learning, and domain generalization, often employing methods such as Bayesian decision theory, distributionally robust optimization (e.g., Wasserstein DRO, which admits a regularization interpretation), and data augmentation strategies. These advances improve model robustness, generalization, and fairness, with impact in fields ranging from computer vision and natural language processing to personalized medicine and safe AI deployment.
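As a concrete illustration, the simplest instance of this framework is empirical risk minimization (ERM), which replaces the unknown expected loss with its sample average and minimizes that instead. A minimal sketch in Python/NumPy follows; the synthetic data, logistic loss, and gradient-descent settings are all illustrative assumptions, not drawn from any particular paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data (assumption: illustrative only).
n, d = 200, 3
X = rng.normal(size=(n, d))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + 0.1 * rng.normal(size=n) > 0).astype(float)

def empirical_risk(w, X, y):
    """Sample average of the logistic loss: the empirical risk R_hat(w)."""
    margin = (2 * y - 1) * (X @ w)          # signed margin in {+,-}
    return np.mean(np.logaddexp(0.0, -margin))  # stable log(1 + e^{-m})

def grad(w, X, y):
    """Gradient of the empirical risk for logistic regression."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))      # sigmoid predictions
    return X.T @ (p - y) / len(y)

# Plain gradient descent on the empirical risk.
w = np.zeros(d)
for _ in range(500):
    w -= 0.5 * grad(w, X, y)

print(round(empirical_risk(w, X, y), 4))
```

The robust variants surveyed above (e.g., Wasserstein DRO) modify this objective by minimizing the worst-case risk over a neighborhood of the empirical distribution rather than the sample average itself.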

Papers