Parallel Optimization
Parallel optimization accelerates the training and deployment of complex models by distributing computation across multiple processors or machines. Current research focuses on improving the efficiency and stability of parallel algorithms such as minibatch SGD and local SGD, and on newer directions such as concurrently optimizing multiple objectives and exploiting advanced attention mechanisms in large language models. These advances are essential for handling increasingly large datasets and complex problems in machine learning, with impact on fields ranging from autonomous driving and robotics to natural language processing and scientific computing.
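
To make the contrast with ordinary minibatch SGD concrete, the following is a minimal sketch of local SGD on a synthetic least-squares problem: each simulated worker takes several SGD steps on its own data shard before the workers average their parameters, so communication happens once per round rather than once per gradient step. The function name local_sgd, the data, and parameters such as rounds and local_steps are illustrative assumptions, not drawn from any specific paper or library.

import numpy as np

def local_sgd(shards, dim, rounds=20, local_steps=10, lr=0.01, seed=0):
    """Run SGD independently on each data shard, averaging the
    per-worker parameters after every `local_steps` updates.
    (Illustrative sketch; names and defaults are assumptions.)"""
    rng = np.random.default_rng(seed)
    workers = [np.zeros(dim) for _ in shards]  # one parameter copy per worker
    for _ in range(rounds):
        for w, (X_s, y_s) in enumerate(shards):
            for _ in range(local_steps):
                i = rng.integers(len(y_s))  # sample one local example
                grad = (X_s[i] @ workers[w] - y_s[i]) * X_s[i]  # squared-loss gradient
                workers[w] -= lr * grad
        avg = np.mean(workers, axis=0)  # communication step: average the models
        workers = [avg.copy() for _ in workers]
    return workers[0]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_w = rng.normal(size=5)
    shards = []
    for _ in range(4):  # four simulated workers, each with its own shard
        X = rng.normal(size=(200, 5))
        shards.append((X, X @ true_w))
    w = local_sgd(shards, dim=5)
    print("parameter error:", np.linalg.norm(w - true_w))

Setting local_steps=1 and averaging after every update recovers the communication pattern of minibatch SGD; larger values trade per-step synchronization cost for some drift between worker models, which is exactly the efficiency/stability trade-off studied in this line of work.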