Convergence Properties
Convergence properties are a central focus of current research in machine learning and optimization, where the goal is to understand and improve how quickly and reliably algorithms reach optimal solutions. Active directions include analyzing the convergence of stochastic gradient descent (SGD) variants such as random reshuffling, and studying newer algorithms such as fractional gradient descent and methods operating on Riemannian manifolds, often within federated learning frameworks. These investigations underpin more efficient and robust machine learning models and optimization techniques across diverse applications, from image reconstruction to distributionally robust optimization.
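To make the random-reshuffling variant concrete, here is a minimal sketch (not taken from any of the surveyed papers) of SGD with random reshuffling on a least-squares objective. Unlike with-replacement sampling, each epoch visits every sample exactly once in a freshly shuffled order; this is the variant whose convergence rates are the subject of the analyses mentioned above. All names and constants below are illustrative assumptions.

```python
import numpy as np

# Illustrative setup (assumed, not from the text): noiseless least squares
#   f(w) = (1/2n) * ||X w - y||^2
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

def loss(w):
    r = X @ w - y
    return 0.5 * np.mean(r ** 2)

def sgd_random_reshuffling(epochs=50, lr=0.05):
    """SGD where each epoch is one full pass over a fresh permutation."""
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):          # reshuffle every epoch
            grad_i = (X[i] @ w - y[i]) * X[i]  # gradient of sample i
            w -= lr * grad_i
    return w

w_hat = sgd_random_reshuffling()
print(loss(w_hat))  # near zero on this noiseless problem
```

On smooth strongly convex problems like this one, random reshuffling is known to converge faster per epoch than with-replacement SGD, which is one reason it attracts the analysis referenced in the summary.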