Global Convergence Guarantee
Global convergence guarantees in optimization ensure that an algorithm reliably reaches an optimal solution regardless of initialization, a central challenge for the non-convex problems prevalent in machine learning. Current research focuses on establishing such guarantees for a range of algorithms, including policy gradient methods in reinforcement learning, federated learning schemes, and optimal transport solvers, often via techniques such as dynamical low-rank approximation or entropic regularization. These advances improve the reliability and efficiency of training complex models, yielding more robust and predictable performance in fields such as computer vision, multi-agent systems, and control.