Local Convergence
Local convergence in optimization analyzes the behavior of algorithms near optimal solutions, with the goal of establishing rates of convergence and conditions under which convergence is guaranteed. Current research investigates this in a variety of settings, including gradient descent methods for diverse models such as neural networks and min-max games, as well as second-order methods and algorithms operating on Riemannian manifolds. Understanding local convergence is crucial for developing efficient and reliable optimization algorithms across numerous fields, from machine learning and signal processing to control theory and game theory, impacting both theoretical understanding and practical applications.
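The contrast between first- and second-order local rates can be illustrated with a minimal sketch (the objective, starting point, and step size below are illustrative choices, not taken from any particular paper): near a nondegenerate minimizer, gradient descent shrinks the error by a roughly constant factor per iteration (linear, or geometric, convergence), while Newton's method roughly squares the error per iteration (quadratic convergence).

```python
import math

# Illustrative objective: f(x) = exp(x) - x, with unique minimizer x* = 0,
# f'(0) = 0 and f''(0) = 1 (a nondegenerate minimum).
def grad(x):
    return math.exp(x) - 1.0

def hess(x):
    return math.exp(x)

def gradient_descent(x, steps, lr=0.5):
    """Run gradient descent; return the error |x_k - x*| after each step."""
    errs = []
    for _ in range(steps):
        x -= lr * grad(x)
        errs.append(abs(x))
    return errs

def newton(x, steps):
    """Run Newton's method; return the error |x_k - x*| after each step."""
    errs = []
    for _ in range(steps):
        x -= grad(x) / hess(x)
        errs.append(abs(x))
    return errs

gd = gradient_descent(0.5, 20)   # error contracts by a factor ~ lr * f''(0) = 0.5
nt = newton(0.5, 5)              # error roughly squares each iteration
```

Near `x* = 0` the gradient-descent error ratio `gd[k+1]/gd[k]` settles near `1 - lr * f''(0) = 0.5`, a linear rate that holds only locally, whereas Newton's error drops below machine precision in a handful of steps. This is the kind of behavior that local convergence analyses make precise.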