Training Convergence
Training convergence in machine learning concerns how quickly and reliably an optimization process reaches a stable, accurate model, a property that directly affects both final performance and resource consumption. Current research emphasizes improving convergence speed and stability across diverse settings, including federated learning (where algorithms such as FedRank and AdaptSFL address client selection and resource constraints) and object detection (where techniques such as hybrid pooling and novel loss functions are applied). These advances matter because faster, more reliable convergence translates into shorter training time, lower energy consumption, and improved model accuracy across applications ranging from recommendation systems to 3D registration.
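Operationally, convergence is usually detected by monitoring the training loss and stopping once it stops improving. The sketch below shows one minimal version of such a plateau check; the `has_converged` helper, its `window` and `tol` parameters, and the geometric loss decay are all hypothetical choices for illustration, not drawn from any of the surveyed papers.

```python
import numpy as np

def has_converged(loss_history, window=5, tol=1e-4):
    """Plateau test: declare convergence when the mean absolute change
    in loss over the last `window` steps drops below `tol`.
    Both thresholds are illustrative choices, not taken from any paper."""
    if len(loss_history) < window + 1:
        return False
    recent = np.asarray(loss_history[-(window + 1):])
    return float(np.mean(np.abs(np.diff(recent)))) < tol

# Toy training loop: the loss decays geometrically, standing in for
# whatever optimizer and model are actually being trained.
losses = []
loss = 1.0
for step in range(1000):
    loss *= 0.95
    losses.append(loss)
    if has_converged(losses):
        print(f"converged at step {step}, loss = {loss:.6f}")
        break
```

In practice, a check like this is usually paired with a patience counter or validation-loss tracking so that training does not stop prematurely on a noisy plateau.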
Papers
Twelve papers, published between January 21, 2022 and October 30, 2024.