Random Reshuffling
Random reshuffling, a technique that randomly reorders the data once per epoch before processing it in iterative algorithms, is a focus of current research in optimization, particularly within machine learning. Studies explore its impact on the convergence rates of various algorithms, including stochastic gradient descent and its variants, and examine its effectiveness across different problem classes (e.g., smooth vs. non-smooth, convex vs. non-convex) and in distributed settings. This research aims to improve the efficiency and performance of optimization algorithms, leading to faster training times and potentially better generalization in machine learning models, with applications ranging from hyperparameter tuning to federated learning.
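A minimal sketch of the idea, using stochastic gradient descent on a toy least-squares objective: unlike with-replacement sampling, random reshuffling draws a fresh permutation of the data at the start of each epoch and then visits every component exactly once. The function and problem below are illustrative assumptions, not drawn from any specific paper.

```python
import random

def sgd_random_reshuffling(grad_fns, w, lr=0.1, epochs=50, seed=0):
    """SGD with random reshuffling: each epoch samples a fresh random
    permutation of the component gradients and visits each exactly once."""
    rng = random.Random(seed)
    order = list(range(len(grad_fns)))
    for _ in range(epochs):
        rng.shuffle(order)              # new permutation every epoch
        for i in order:                 # without-replacement pass over the data
            w = w - lr * grad_fns[i](w)
    return w

# Toy problem: minimize f(w) = (1/n) * sum_i (w - a_i)^2,
# whose minimizer is the mean of the a_i (here 3.0).
a = [1.0, 2.0, 3.0, 6.0]
grads = [lambda w, ai=ai: 2.0 * (w - ai) for ai in a]
w_star = sgd_random_reshuffling(grads, w=0.0)
print(w_star)  # approaches the mean of a, up to a step-size-dependent bias
```

The contrast with plain SGD is the inner loop: with-replacement sampling would call `rng.choice` at every step, so some data points could be skipped in an epoch, whereas reshuffling guarantees each point is used exactly once per pass.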