Paper ID: 2410.03760

On the SAGA algorithm with decreasing step

Luis Fredes (IMB), Bernard Bercu (IMB), Eméric Gbaguidi (IMB)

Stochastic optimization problems naturally appear in many application areas, including machine learning. Our goal is to go further in the analysis of the Stochastic Average Gradient Accelerated (SAGA) algorithm. To achieve this, we introduce a new $\lambda$-SAGA algorithm which interpolates between Stochastic Gradient Descent ($\lambda=0$) and the SAGA algorithm ($\lambda=1$). Firstly, we investigate the almost sure convergence of this new algorithm with decreasing step size, which allows us to avoid the restrictive strong convexity and Lipschitz gradient hypotheses on the objective function. Secondly, we establish a central limit theorem for the $\lambda$-SAGA algorithm. Finally, we provide the non-asymptotic $\mathbb{L}^p$ rates of convergence.

Submitted: Oct 2, 2024
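
Below is a minimal sketch of what an interpolated $\lambda$-SAGA iteration might look like, assuming the interpolation parameter $\lambda$ scales only the SAGA variance-reduction correction (stored gradient minus table average), so that $\lambda=0$ recovers plain SGD and $\lambda=1$ the standard SAGA update. The function names (`lambda_saga`, `grad_i`), the step-size schedule, and the exact form of the update are illustrative assumptions, not taken from the paper.

```python
import numpy as np


def lambda_saga(grad_i, x0, n, lam=1.0,
                step=lambda k: 1.0 / (k + 1) ** 0.75,
                n_iter=10_000, rng=None):
    """Hypothetical lambda-SAGA iteration (sketch, not the paper's code).

    grad_i(i, x) : gradient of the i-th component function at x.
    n            : number of component functions.
    lam = 0 reduces to SGD; lam = 1 reduces to the usual SAGA update.
    step(k)      : decreasing step-size schedule (illustrative choice).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    memory = np.zeros((n, x.size))      # stored gradients phi_j
    avg = memory.mean(axis=0)           # running average of the table

    for k in range(n_iter):
        i = rng.integers(n)
        g = grad_i(i, x)
        # SGD direction, corrected by lam times the SAGA control variate
        direction = g - lam * (memory[i] - avg)
        x -= step(k) * direction
        # Refresh the gradient table and keep its average up to date
        avg += (g - memory[i]) / n
        memory[i] = g
    return x


# Illustrative usage: least squares on synthetic data
rng = np.random.default_rng(0)
A, b = rng.normal(size=(200, 5)), rng.normal(size=200)
grad = lambda i, x: 2.0 * A[i] * (A[i] @ x - b[i])   # gradient of (A_i x - b_i)^2
x_hat = lambda_saga(grad, np.zeros(5), n=200, lam=1.0)
```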