Empirical Risk Minimization
Empirical Risk Minimization (ERM) is a fundamental machine learning principle: choose the model that minimizes the average prediction loss over the training data. Current research focuses on improving ERM's robustness and generalization through techniques such as regularization (including relative entropy and f-divergence penalties), distributionally robust optimization, and model aggregation strategies that prioritize variance reduction over raw error minimization. These advances address challenges such as overfitting, adversarial attacks, and fairness concerns, yielding more reliable and trustworthy models with broader applicability across diverse fields.
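To make the principle concrete, here is a minimal sketch (not drawn from any particular paper above) of regularized ERM for logistic regression: the empirical risk is the average logistic loss on the training set, an L2 penalty plays the role of the regularizer mentioned above, and plain gradient descent does the minimization. All function names and the toy dataset are illustrative assumptions.

```python
import numpy as np

def empirical_risk(w, X, y, lam=0.0):
    """Average logistic loss over the training set, plus an optional
    L2 regularizer (one common way to curb overfitting)."""
    margins = y * (X @ w)
    return np.log1p(np.exp(-margins)).mean() + lam * np.dot(w, w)

def erm_fit(X, y, lam=0.0, lr=0.5, steps=500):
    """Minimize the (regularized) empirical risk by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        margins = y * (X @ w)
        # Gradient of mean log(1 + exp(-y x.w)) w.r.t. w,
        # plus the gradient of the L2 penalty lam * ||w||^2.
        grad = (X * (-y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
        grad += 2 * lam * w
        w -= lr * grad
    return w

# Toy linearly separable data: the label is the sign of the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0])

w = erm_fit(X, y, lam=0.01)
train_error = np.mean(np.sign(X @ w) != y)  # empirical 0-1 risk
```

Swapping the L2 penalty for a relative-entropy or f-divergence term, or replacing the plain average with a worst-case average over a neighborhood of the empirical distribution, yields the regularized and distributionally robust variants the summary refers to.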