Smoothed Online Learning
Smoothed online learning bridges the gap between classical i.i.d. and fully adversarial online learning: at each time step, the adversary must draw its example from a distribution whose density is bounded relative to a fixed base measure. Current research focuses on designing algorithms with sublinear regret, particularly in settings where the base measure is unknown, and on the computational complexity of achieving optimal regret via oracle-efficient methods such as Empirical Risk Minimization (ERM) and Follow-the-Perturbed-Leader (FTPL). These advances aim to make online learning more efficient and robust in areas such as online prediction, control systems, and sequential decision-making, where fully adversarial assumptions are often overly pessimistic.
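To make the setting concrete, here is a minimal, illustrative sketch (not from the source) of Follow-the-Perturbed-Leader over a small finite class of threshold classifiers, run against a smoothed adversary. The smoothness parameter `sigma`, the hypothesis class of thresholds, and the perturbation scale are all assumptions chosen for illustration: the adversary samples uniformly from a subinterval of length `sigma`, so its density with respect to the uniform base measure on [0, 1] is at most 1/`sigma`.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 0.25                                # smoothness: density <= 1/sigma
T = 2000                                    # number of rounds
thresholds = np.linspace(0.0, 1.0, 51)      # hypothesis class: predict 1 iff x >= theta
cum_loss = np.zeros_like(thresholds)        # cumulative 0-1 loss of each expert
ftpl_loss = 0.0

for t in range(T):
    # Smoothed adversary: pick a length-sigma window, then sample uniformly in it.
    lo = rng.uniform(0.0, 1.0 - sigma)
    x = rng.uniform(lo, lo + sigma)
    y = int(x >= 0.5)                       # ground-truth label

    # FTPL: follow the leader on perturbed cumulative losses.
    perturbation = rng.exponential(scale=np.sqrt(t + 1), size=thresholds.shape)
    leader = int(np.argmin(cum_loss - perturbation))
    pred = int(x >= thresholds[leader])
    ftpl_loss += float(pred != y)

    # Update every expert's cumulative 0-1 loss on this example.
    cum_loss += ((x >= thresholds).astype(int) != y).astype(float)

regret = ftpl_loss - cum_loss.min()
print(f"FTPL regret over {T} rounds: {regret:.1f}")
```

Because the class contains the threshold 0.5, the best expert makes no mistakes here, and the printed regret grows sublinearly in `T`; the exponential perturbations are one standard choice for FTPL, not the only one studied in this literature.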