Excess Risk
Excess risk, the gap between a learned model's expected loss and the best loss achievable over a reference class of predictors, is a central quantity in statistical learning theory and a standard yardstick for model accuracy and robustness. Current research develops algorithms and theoretical bounds for controlling excess risk in a range of settings, including minimax optimization, PAC-Bayesian analysis, and distributionally robust optimization, often via techniques such as stochastic gradient descent and mirror descent. Understanding and controlling excess risk is essential for building reliable, high-performing machine learning models across diverse applications, from classification and regression to natural language processing and reinforcement learning.
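Formally, excess risk compares the population risk of a learned predictor to the best risk attainable. A minimal statement of the standard definitions (the notation below is illustrative, not tied to any particular paper):

```latex
% Population risk of a predictor f under loss \ell and data distribution \mathcal{D}:
R(f) = \mathbb{E}_{(X,Y)\sim\mathcal{D}}\!\left[\ell\bigl(f(X),\,Y\bigr)\right]

% Excess risk of a learned predictor \hat{f} relative to the best predictor
% in the class \mathcal{F} (taking the infimum over all measurable functions
% instead gives the gap to the Bayes risk):
\mathcal{E}(\hat{f}) = R(\hat{f}) - \inf_{f\in\mathcal{F}} R(f)
```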
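As a concrete, hypothetical illustration, the sketch below estimates excess risk on synthetic data where the best achievable error is known in closed form: two identity-covariance Gaussian classes with means ±μ have Bayes risk Φ(−‖μ‖), and a linear classifier is fit by stochastic gradient descent on the logistic loss. All names and parameter choices are assumptions made for this example, not drawn from the papers surveyed here.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: two identity-covariance Gaussians with means +/- mu,
# so the Bayes risk is Phi(-||mu||) in closed form and the excess risk
# of any trained classifier can be read off directly.
d = 2
mu = np.ones(d)

def sample(n):
    y = rng.integers(0, 2, size=n)                      # labels in {0, 1}
    X = rng.normal(size=(n, d)) + (2 * y[:, None] - 1) * mu
    return X, y

X_train, y_train = sample(5000)
X_test, y_test = sample(100000)

# Bayes risk: Phi(-||mu||), with Phi(z) = 0.5 * (1 + erf(z / sqrt(2))).
bayes_risk = 0.5 * (1 + math.erf(-np.linalg.norm(mu) / math.sqrt(2)))

# One pass of SGD on the logistic loss with a 1/sqrt(t) step size.
w, b, lr = np.zeros(d), 0.0, 0.1
for t, i in enumerate(rng.permutation(len(X_train))):
    s = 2 * y_train[i] - 1                              # label in {-1, +1}
    margin = s * (X_train[i] @ w + b)
    g = -s / (1 + math.exp(margin))                     # d/d(margin) of log(1+e^{-margin})
    step = lr / math.sqrt(t + 1)
    w -= step * g * X_train[i]
    b -= step * g

# Empirical 0-1 risk on held-out data; excess risk is the gap to Bayes.
test_error = np.mean((X_test @ w + b > 0).astype(int) != y_test)
print(f"test 0-1 risk: {test_error:.4f}")
print(f"Bayes risk:    {bayes_risk:.4f}")
print(f"excess risk:   {test_error - bayes_risk:.4f}")
```

With these settings the estimated excess risk should be close to zero, since the linear model class contains the Bayes rule; shrinking the training set or misspecifying the class makes the gap visible.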