Empirical Risk Minimization
Empirical Risk Minimization (ERM) is a foundational machine learning principle: select the model that minimizes the average prediction loss on the training data. Current research focuses on improving ERM's robustness and generalization through techniques such as regularization (including relative entropy and f-divergences), distributionally robust optimization, and model aggregation strategies that prioritize variance reduction over raw error minimization. These advances address challenges such as overfitting, adversarial attacks, and fairness concerns, yielding more reliable and trustworthy models with broader applicability across diverse fields.
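The core idea can be sketched concretely: given a loss function and a training set, ERM picks the parameters minimizing the average training loss, optionally plus a regularizer. The snippet below is a minimal illustration (not from any specific paper discussed here), using squared-error loss, an optional L2 penalty, and plain gradient descent on toy data; the function names and hyperparameters are hypothetical choices for the example.

```python
import numpy as np

def empirical_risk(w, X, y, lam=0.0):
    """Average squared-error loss over the training set,
    plus an optional L2 (ridge) regularization term."""
    residuals = X @ w - y
    return np.mean(residuals ** 2) + lam * np.dot(w, w)

def erm_fit(X, y, lam=0.0, lr=0.1, steps=500):
    """Minimize the (regularized) empirical risk by gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        # Gradient of mean squared error plus L2 penalty.
        grad = 2.0 * X.T @ (X @ w - y) / n + 2.0 * lam * w
        w -= lr * grad
    return w

# Toy regression data: y ≈ 2 * x with small noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 2.0 * X[:, 0] + 0.01 * rng.normal(size=100)

w = erm_fit(X, y)          # learned weight should be close to 2
risk = empirical_risk(w, X, y)
```

Setting `lam > 0` trades a little training error for a smaller-norm solution, which is the simplest instance of the regularization-for-generalization theme noted above.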