Empirical Risk Minimization
Empirical Risk Minimization (ERM) is a fundamental machine learning principle: select the model that minimizes the average loss (the empirical risk) over the training data. Current research focuses on improving ERM's robustness and generalization through techniques such as regularization (including relative entropy and f-divergences), distributionally robust optimization, and model aggregation strategies that prioritize variance reduction over raw error minimization. These advances address challenges such as overfitting, adversarial attacks, and fairness concerns, leading to more reliable and trustworthy machine learning models with broader applicability across diverse fields.
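The core idea can be made concrete with a minimal sketch (illustrative only, not drawn from any paper listed here): with squared loss and a constant predictor, the empirical risk is the average squared error over the sample, and the ERM solution is the sample mean, which plain gradient descent recovers.

```python
# Minimal ERM sketch (assumed toy setup): choose the parameter w that
# minimizes the empirical risk, i.e. the average loss over the training
# sample. With squared loss and a constant predictor, the ERM solution
# is the sample mean; gradient descent converges to it.

data = [1.0, 2.0, 2.5, 4.0, 3.5]

def empirical_risk(w, sample):
    """Average squared loss of the constant predictor w on the sample."""
    return sum((w - y) ** 2 for y in sample) / len(sample)

def risk_gradient(w, sample):
    """Derivative of the empirical risk with respect to w."""
    return sum(2 * (w - y) for y in sample) / len(sample)

w = 0.0
for _ in range(200):
    w -= 0.1 * risk_gradient(w, data)  # plain gradient descent

print(round(w, 4))                      # → 2.6, the sample mean
print(round(empirical_risk(w, data), 4))  # → 1.14
```

Regularization, as mentioned above, modifies this objective: adding a penalty such as `lam * w**2` to `empirical_risk` shrinks the minimizer toward zero (here, to `mean / (1 + lam)`), trading a little training error for reduced variance.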