Proximal Gradient
Proximal gradient methods are iterative optimization algorithms designed to efficiently minimize functions composed of a smooth and a nonsmooth part, a common structure in many machine learning and signal processing problems. Current research focuses on extending these methods to handle nonconvex and nonsmooth objectives, developing accelerated variants (e.g., using Nesterov acceleration or adaptive momentum), and adapting them to distributed and federated learning settings. These advancements are significant because they enable the solution of increasingly complex optimization problems arising in diverse applications, including image processing, reinforcement learning, and large-scale data analysis.
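The core update alternates a gradient step on the smooth part f with a proximal step on the nonsmooth part g: x_{k+1} = prox_{t g}(x_k - t ∇f(x_k)), where prox_{t g}(v) = argmin_x { g(x) + (1/2t)||x - v||^2 }. Below is a minimal sketch of this idea for the lasso problem, where the prox of the L1 norm reduces to soft-thresholding (this is the classic ISTA scheme); the function names and the synthetic data are illustrative, not taken from any particular paper.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient_lasso(A, b, lam, step=None, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient (ISTA).

    Smooth part f(x) = 0.5*||Ax - b||^2 with gradient A^T (Ax - b);
    nonsmooth part g(x) = lam*||x||_1 with a closed-form prox.
    """
    if step is None:
        # Use 1/L, where L = ||A||_2^2 is the Lipschitz constant of grad f.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                         # gradient step on f
        x = soft_threshold(x - step * grad, step * lam)  # prox step on g
    return x

# Illustrative usage on a small synthetic sparse-recovery problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = proximal_gradient_lasso(A, b, lam=0.1)
print("nonzeros recovered:", np.sum(np.abs(x_hat) > 1e-3))
```

Accelerated variants such as FISTA keep the same prox step but apply it at an extrapolated point built from the two most recent iterates, improving the convergence rate from O(1/k) to O(1/k^2) on convex problems.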