Proximal Gradient
Proximal gradient methods are iterative optimization algorithms for minimizing composite objectives of the form F(x) = f(x) + g(x), where f is smooth and g is nonsmooth but admits an inexpensive proximal operator, a structure common in many machine learning and signal processing problems. Each iteration takes a gradient step on the smooth part and then applies the proximal operator of the nonsmooth part: x_{k+1} = prox_{γg}(x_k − γ∇f(x_k)). Current research focuses on extending these methods to nonconvex and nonsmooth objectives, developing accelerated variants (e.g., using Nesterov acceleration or adaptive momentum), and adapting them to distributed and federated learning settings. These advances are significant because they enable the solution of increasingly complex optimization problems arising in diverse applications, including image processing, reinforcement learning, and large-scale data analysis.
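As a concrete illustration, here is a minimal sketch of the basic proximal gradient iteration (ISTA, in the case of ℓ1 regularization) applied to a small Lasso problem. The function and variable names are illustrative, not drawn from any specific paper; the step size is set from the Lipschitz constant of the smooth part's gradient.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_lasso(A, b, lam, n_iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via proximal gradient (ISTA).

    Smooth part:    f(x) = 0.5*||Ax - b||^2, with gradient A^T (Ax - b).
    Nonsmooth part: g(x) = lam*||x||_1, whose prox is soft-thresholding.
    """
    # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant of grad f.
    L = np.linalg.norm(A, 2) ** 2
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                          # gradient step on f
        x = soft_threshold(x - step * grad, step * lam)   # prox step on g
    return x

# Example: recover a sparse signal from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 200))
x_true = np.zeros(200)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = proximal_gradient_lasso(A, b, lam=0.1)
```

Accelerated variants such as FISTA keep the same prox step but evaluate the gradient at an extrapolated point built from the two most recent iterates, improving the convergence rate from O(1/k) to O(1/k²) on convex problems.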