Empirical Risk Minimization
Empirical Risk Minimization (ERM) is a fundamental principle in machine learning: choose the model that minimizes the average loss on the training data. Current research focuses on improving ERM's robustness and generalization through techniques such as regularization (including relative entropy and f-divergences), distributionally robust optimization, and model aggregation strategies that prioritize variance reduction over raw error minimization. These advances address challenges such as overfitting, adversarial attacks, and fairness concerns, yielding more reliable and trustworthy models with broader applicability across diverse fields.
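As a concrete illustration of the principle described above, here is a minimal sketch (not drawn from any of the listed papers) of ERM for a linear model: gradient descent on the empirical risk, taken here to be the mean squared error over the training sample. The `lam` parameter is a hypothetical knob showing where an L2 (ridge) regularizer, one of the regularization strategies mentioned, would enter the objective.

```python
import numpy as np

def empirical_risk(X, y, w):
    """Average squared loss of the linear model w over the training sample."""
    return np.mean((X @ w - y) ** 2)

def erm_linear(X, y, lam=0.0, lr=0.1, steps=500):
    """Minimize the empirical risk (plus an optional L2 penalty) by gradient descent.

    lam=0.0 recovers plain ERM; lam > 0 gives the ridge-regularized variant.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        residual = X @ w - y
        risk_grad = X.T @ residual / n       # gradient of the empirical risk
        w -= lr * (risk_grad + lam * w)      # L2 penalty contributes lam * w
    return w
```

On synthetic data generated from a known linear rule, the unregularized version drives the empirical risk to (near) zero; setting `lam > 0` trades some training error for a smaller-norm, typically better-generalizing solution, which is exactly the overfitting tension the paragraph above refers to.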
Papers
(Listing of 18 papers dated between June 21 and October 28, 2023.)