Excess Risk
Excess risk, the gap between a model's expected error and the best achievable (Bayes) error, is a central concept in machine learning, driving research aimed at improving model accuracy and robustness. Current research focuses on algorithms and theoretical bounds for minimizing excess risk in various settings, including minimax optimization, PAC-Bayesian analysis, and distributionally robust optimization, often employing techniques such as stochastic gradient descent and mirror descent. Understanding and controlling excess risk is crucial for building reliable, high-performing models across diverse applications, from classification and regression to natural language processing and reinforcement learning.
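The definition above can be made concrete on a toy problem where the Bayes risk is known in closed form, so the excess risk of a suboptimal rule can be estimated directly. The sketch below (the distribution and thresholds are illustrative choices, not from any particular paper) uses a 1-D classification task where the Bayes classifier and its risk are computable by hand:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task: x ~ Uniform(0, 1), P(y = 1 | x) = x.
# The Bayes rule predicts 1 iff x > 0.5, and the Bayes risk is
# E[min(x, 1 - x)] = 0.25, so excess risk is measurable exactly.
n = 100_000
x = rng.uniform(0.0, 1.0, size=n)
y = (rng.uniform(0.0, 1.0, size=n) < x).astype(int)

def risk(threshold):
    """Empirical 0-1 risk of the rule 'predict 1 iff x > threshold'."""
    preds = (x > threshold).astype(int)
    return np.mean(preds != y)

bayes_risk = risk(0.5)   # close to 0.25 by construction
model_risk = risk(0.7)   # a deliberately suboptimal threshold
excess = model_risk - bayes_risk
print(f"excess risk of threshold 0.7: {excess:.3f}")
```

For the threshold 0.7 the true risk is 0.29, so the estimated excess risk should come out near 0.04; research on excess-risk bounds characterizes how fast this gap shrinks as a learner is trained on more data.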