Faster Convergence
Faster convergence in machine learning aims to reduce the time and computational resources required to train models to a desired level of accuracy. Current research focuses on improving optimization algorithms (e.g., variants of Adam, SGD, and ADMM), developing novel sampling techniques for efficient data utilization (e.g., in federated learning and Physics-Informed Neural Networks), and leveraging architectural innovations (e.g., in Transformer networks and graph-based models) to accelerate training. These advancements are significant because faster convergence translates to reduced energy consumption, faster model deployment, and improved efficiency in various applications, from edge computing to large-scale deep learning.
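Since the summary above names optimizer choice (e.g., SGD vs. Adam) as one lever for faster convergence, the following is a minimal sketch of that idea on a toy least-squares problem. The helper names (steps_to_target, make_sgd, make_adam), the hand-rolled Adam update, and all hyperparameters are illustrative assumptions, not results from any of the listed papers.

```python
# Minimal sketch: count how many steps a fixed-step gradient method vs. Adam
# needs to push a toy least-squares loss below a target. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true_w = rng.normal(size=10)
y = X @ true_w + 0.01 * rng.normal(size=200)

def loss_and_grad(w):
    resid = X @ w - y
    return 0.5 * np.mean(resid ** 2), X.T @ resid / len(y)

def steps_to_target(update, w, target=1e-3, max_steps=5000):
    """Run an optimizer update rule until the loss drops below `target`."""
    for t in range(1, max_steps + 1):
        loss, g = loss_and_grad(w)
        if loss < target:
            return t
        w = update(w, g, t)
    return max_steps

# Plain (full-batch) gradient descent with a fixed step size.
def make_sgd(lr=0.05):
    return lambda w, g, t: w - lr * g

# Adam: per-coordinate adaptive steps from first/second moment estimates.
def make_adam(lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m = np.zeros(X.shape[1])
    v = np.zeros(X.shape[1])
    def update(w, g, t):
        nonlocal m, v
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g ** 2
        m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
        v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
        return w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return update

w0 = np.zeros(X.shape[1])
print("SGD steps :", steps_to_target(make_sgd(), w0.copy()))
print("Adam steps:", steps_to_target(make_adam(), w0.copy()))
```

On this toy problem the step counts differ only because of the update rule; the point is that "faster convergence" can come purely from the optimizer, before any change to data sampling or architecture.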