Follow the Regularized Leader

Follow-the-Regularized-Leader (FTRL) is an online learning framework that, at each round, plays the action minimizing the cumulative loss observed so far plus a regularization term; the regularizer stabilizes the "leader" and yields sublinear regret against the best fixed action in hindsight. Current research focuses on improving FTRL's efficiency and adaptability, for example through the closely related Follow-the-Perturbed-Leader (FTPL), which replaces explicit regularization with random perturbations, and through adaptive learning rates and regularizers that achieve strong guarantees in both stochastic and adversarial environments, including settings with delayed feedback and constraints. These advances broaden FTRL's applicability to problems such as online advertising, multi-armed bandits, and reinforcement learning, yielding algorithms with sharper regret bounds and better practical performance.
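As a concrete illustration (a minimal sketch, not drawn from any particular paper below; the function name `ftrl_entropy` and the step-size choice are illustrative assumptions): with linear losses g_1, ..., g_T and a negative-entropy regularizer R(x) = (1/eta) * sum_i x_i log x_i over the probability simplex, the FTRL update x_{t+1} = argmin_x { <sum_{s<=t} g_s, x> + R(x) } has a closed form, the exponential-weights (Hedge) update x_{t+1} proportional to exp(-eta * sum_{s<=t} g_s).

```python
import numpy as np

def ftrl_entropy(loss_vectors, eta=0.1):
    """FTRL over the probability simplex with a negative-entropy
    regularizer. With linear losses <g_t, x>, minimizing the
    regularized cumulative loss reduces to the exponential-weights
    (Hedge) update: x_{t+1} proportional to exp(-eta * sum_s g_s).
    """
    d = len(loss_vectors[0])
    cum_g = np.zeros(d)            # running sum of observed loss vectors
    plays, total_loss = [], 0.0
    for g in loss_vectors:
        # FTRL step: argmin_x <cum_g, x> + R(x) over the simplex,
        # solved in closed form via a softmax of -eta * cum_g.
        logits = -eta * cum_g
        logits -= logits.max()     # shift for numerical stability
        x = np.exp(logits)
        x /= x.sum()
        plays.append(x)
        total_loss += float(g @ x)  # incur this round's linear loss
        cum_g += g                  # the "leader" now accounts for g
    return plays, total_loss

# Usage: random losses over d = 5 actions for T = 500 rounds.
rng = np.random.default_rng(0)
losses = rng.uniform(0, 1, size=(500, 5))
_, alg_loss = ftrl_entropy(losses, eta=np.sqrt(np.log(5) / 500))
best_fixed = losses.sum(axis=0).min()
print(f"regret ≈ {alg_loss - best_fixed:.2f}")
```

With eta on the order of sqrt(log d / T), this instance attains the standard O(sqrt(T log d)) regret against the best fixed action; much of the research summarized above varies the regularizer, the learning-rate schedule, or the feedback model while keeping this same argmin structure.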

Papers