Empirical Risk Minimization
Empirical Risk Minimization (ERM) is a fundamental machine learning principle: choose the model that minimizes the average prediction loss on the training data. Current research focuses on improving ERM's robustness and generalization through techniques such as regularization (including relative entropy and f-divergences), distributionally robust optimization, and model-aggregation strategies that prioritize variance reduction over raw error minimization. These advances address challenges such as overfitting, adversarial attacks, and fairness concerns, yielding more reliable and trustworthy models with broader applicability across diverse fields.
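To make the principle concrete, here is a minimal sketch of ERM for binary classification: logistic loss averaged over a training set, minimized by plain gradient descent. The function names, toy data, and hyperparameters are illustrative assumptions, not drawn from any of the papers above.

```python
import numpy as np

def empirical_risk(w, X, y):
    # Empirical risk: logistic loss averaged over the training set.
    # Labels y are in {-1, +1}; the margin is y * (X @ w).
    z = X @ w
    return np.mean(np.log(1.0 + np.exp(-y * z)))

def erm_fit(X, y, lr=0.1, steps=500):
    # Gradient descent on the empirical risk (hypothetical helper).
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        z = X @ w
        # d/dw of log(1 + exp(-y z)) is -y x / (1 + exp(y z)).
        grad = np.mean((-y / (1.0 + np.exp(y * z)))[:, None] * X, axis=0)
        w -= lr * grad
    return w

# Toy linearly separable data for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

w = erm_fit(X, y)
risk = empirical_risk(w, X, y)
```

Adding a penalty such as `0.01 * np.sum(w**2)` to the objective would turn this into the regularized ERM mentioned above, which trades a little training error for better generalization.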