Parallel Stochastic
Parallel stochastic methods aim to accelerate optimization algorithms, particularly stochastic gradient descent (SGD), by distributing computation across multiple processors. Current research focuses on improving the efficiency and convergence rates of parallel SGD, addressing challenges such as communication bottlenecks and straggler effects through techniques like hybrid synchronization strategies, mini-batching, and gradient compression. These advances are crucial for training large-scale machine learning models and for solving complex optimization problems in fields such as image processing, natural language processing, and graph analysis, where they enable faster and more scalable solutions.
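As a concrete illustration of the ideas above, the sketch below shows synchronous data-parallel mini-batch SGD with a simple top-k gradient-compression step. It assumes a simulated pool of workers and a synthetic least-squares objective; the names (`num_workers`, `top_k_compress`, etc.) are illustrative and not drawn from any particular paper.

```python
# A minimal sketch of synchronous data-parallel SGD on a least-squares problem.
# Workers are simulated sequentially; in a real deployment each gradient would be
# computed on a separate processor and the averaging step would be an all-reduce.
# All names below (num_workers, top_k_compress, ...) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data: y = X @ w_true + noise.
n, d = 2000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

def minibatch_gradient(w, batch_idx):
    """Gradient of 0.5 * ||X_b w - y_b||^2 / |b| on one mini-batch."""
    Xb, yb = X[batch_idx], y[batch_idx]
    return Xb.T @ (Xb @ w - yb) / len(batch_idx)

def top_k_compress(g, k):
    """Keep only the k largest-magnitude entries (a simple gradient-compression scheme)."""
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

num_workers, batch_size, lr, steps, k = 4, 32, 0.1, 200, 10
w = np.zeros(d)

for step in range(steps):
    grads = []
    for _ in range(num_workers):
        # Each worker draws its own mini-batch and computes a local gradient.
        batch_idx = rng.choice(n, size=batch_size, replace=False)
        g = minibatch_gradient(w, batch_idx)
        grads.append(top_k_compress(g, k))   # compress before "communication"
    # Synchronous step: average the workers' (compressed) gradients.
    w -= lr * np.mean(grads, axis=0)

print("final parameter error:", np.linalg.norm(w - w_true))
```

In practice the averaging step would be an all-reduce over a cluster (e.g., via MPI or a framework's distributed backend), and asynchronous or hybrid synchronization variants relax the strict per-step barrier to mitigate stragglers, at the cost of working with stale gradients.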
Papers