Random Reshuffling

Random reshuffling, a technique that randomly reorders the data at the start of each pass of an iterative algorithm, is an active topic in optimization research, particularly in machine learning. Studies examine its effect on the convergence rates of stochastic gradient descent and its variants, and its behavior across problem classes (e.g., smooth/non-smooth, convex/non-convex) and in distributed settings. This research aims to make optimization algorithms more efficient, yielding faster training times and potentially better generalization in machine learning models, with applications ranging from hyperparameter tuning to federated learning.
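To make the idea concrete, here is a minimal sketch of SGD with random reshuffling: instead of drawing samples i.i.d. with replacement, each epoch visits every data point exactly once in a freshly shuffled order. The function name, the toy least-squares objective, and all parameters below are illustrative choices, not from any specific paper.

```python
import numpy as np

def sgd_random_reshuffling(grad, x0, data, lr=0.1, epochs=10, seed=0):
    """SGD with random reshuffling: each epoch processes every sample
    exactly once in a new random order (sampling without replacement),
    rather than drawing indices i.i.d. with replacement."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    n = len(data)
    for _ in range(epochs):
        perm = rng.permutation(n)  # fresh shuffle at the start of each epoch
        for i in perm:
            x = x - lr * grad(x, data[i])
    return x

# Toy example (hypothetical): minimize the mean of (x - a_i)^2,
# whose minimizer is the mean of the data points.
data = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda x, a: 2.0 * (x - a)  # per-sample gradient
x_star = sgd_random_reshuffling(grad, 0.0, data, lr=0.05, epochs=200)
```

With a fixed step size the iterate oscillates within a neighborhood of the true minimizer (here the data mean, 2.5); decaying the step size shrinks that neighborhood, and the without-replacement structure is what the convergence-rate analyses mentioned above exploit.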

Papers