Faster Convergence
Faster convergence in machine learning aims to reduce the time and computational resources required to train models to a desired level of accuracy. Current research focuses on improving optimization algorithms (e.g., variants of Adam, SGD, and ADMM), developing sampling techniques for more efficient data utilization (e.g., in federated learning and Physics-Informed Neural Networks), and leveraging architectural innovations (e.g., in Transformer networks and graph-based models) to accelerate training. These advances matter because faster convergence translates into lower energy consumption, quicker model deployment, and better use of compute across applications ranging from edge computing to large-scale deep learning.
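As a concrete illustration of why optimizer choice affects convergence speed, the sketch below compares plain gradient descent with Adam on a toy least-squares problem; Adam's per-coordinate adaptive step sizes typically drive the loss down in fewer iterations. This is a minimal, self-contained example, not taken from any particular paper, and the problem size, learning rates, and iteration counts are illustrative assumptions.

```python
# Minimal sketch: gradient descent vs. Adam on a toy least-squares problem.
# All hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 20))
x_true = rng.normal(size=20)
b = A @ x_true + 0.01 * rng.normal(size=200)

def loss_and_grad(x):
    # 0.5 * mean squared residual and its gradient
    r = A @ x - b
    return 0.5 * np.mean(r ** 2), A.T @ r / len(b)

def run_gd(steps=500, lr=0.05):
    x = np.zeros(20)
    losses = []
    for _ in range(steps):
        loss, g = loss_and_grad(x)
        losses.append(loss)
        x -= lr * g                         # plain gradient step
    return losses

def run_adam(steps=500, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    x = np.zeros(20)
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    losses = []
    for t in range(1, steps + 1):
        loss, g = loss_and_grad(x)
        losses.append(loss)
        m = b1 * m + (1 - b1) * g           # first-moment estimate
        v = b2 * v + (1 - b2) * g ** 2      # second-moment estimate
        m_hat = m / (1 - b1 ** t)           # bias correction
        v_hat = v / (1 - b2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return losses

if __name__ == "__main__":
    gd_losses, adam_losses = run_gd(), run_adam()
    for step in (0, 50, 100, 250, 499):
        print(f"step {step:4d}  GD loss {gd_losses[step]:.4e}  Adam loss {adam_losses[step]:.4e}")
```

Printing the loss at a few checkpoints shows how many iterations each method needs to reach a comparable error, which is the practical meaning of "faster convergence" in this context.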