Faster Convergence
Research on faster convergence in machine learning aims to reduce the time and computational resources required to train models to a desired level of accuracy. Current work focuses on improving optimization algorithms (e.g., variants of Adam, SGD, and ADMM), developing novel sampling techniques for more efficient data utilization (e.g., in federated learning and Physics-Informed Neural Networks), and leveraging architectural innovations (e.g., in Transformer networks and graph-based models) to accelerate training. These advances matter because faster convergence translates into lower energy consumption, quicker model deployment, and improved efficiency across applications ranging from edge computing to large-scale deep learning.
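As a minimal, illustrative sketch of what "faster convergence" means in practice, the snippet below compares how quickly plain SGD and Adam drive down training loss on a small synthetic regression task. The model, data, and hyperparameters are assumptions chosen for brevity and are not taken from any of the papers listed on this page.

```python
# Toy comparison of optimizer convergence speed (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 10)                                  # synthetic inputs
y = X @ torch.randn(10, 1) + 0.1 * torch.randn(512, 1)    # noisy linear targets

def train(optimizer_cls, steps=200, **opt_kwargs):
    """Train a small MLP for a fixed number of steps and return the final loss."""
    torch.manual_seed(0)  # identical initialization for a fair comparison
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = optimizer_cls(model.parameters(), **opt_kwargs)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

# "Faster convergence" here means reaching a lower loss within the same step budget.
print("SGD  final loss:", train(torch.optim.SGD, lr=0.05))
print("Adam final loss:", train(torch.optim.Adam, lr=0.01))
```

On problems like this, adaptive methods such as Adam often reach a given loss in fewer steps than plain SGD, though the best choice is problem-dependent; the same step-budget comparison generalizes to the sampling and architectural techniques mentioned above.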