Proximal Gradient

Proximal gradient methods are iterative optimization algorithms for efficiently minimizing objectives that split into a smooth part and a nonsmooth part, a structure common in machine learning and signal processing (e.g., a least-squares loss plus an l1 regularizer). Each iteration takes a gradient step on the smooth term followed by an evaluation of the proximal operator of the nonsmooth term, which often has a cheap closed form. Current research focuses on extending these methods to nonconvex and nonsmooth objectives, on accelerated variants (e.g., using Nesterov acceleration or adaptive momentum), and on adaptations to distributed and federated learning settings. These advances are significant because they make it practical to solve increasingly complex optimization problems arising in diverse applications, including image processing, reinforcement learning, and large-scale data analysis.
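
As a concrete illustration of the gradient-step-plus-proximal-step structure, here is a minimal sketch of the basic (unaccelerated) method, often called ISTA, applied to the Lasso problem min_x (1/2)||Ax - b||^2 + lam*||x||_1, where the proximal operator of the l1 term is soft thresholding. The problem instance, function names, and step-size choice are illustrative assumptions (using NumPy), not taken from any particular paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient_lasso(A, b, lam, n_iters=500):
    """Minimize (1/2)||Ax - b||^2 + lam * ||x||_1 via proximal gradient (ISTA)."""
    x = np.zeros(A.shape[1])
    # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant
    # of the gradient of the smooth least-squares term.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                         # gradient step on the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # proximal step on the nonsmooth part
    return x

# Illustrative usage on a small sparse-recovery problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = proximal_gradient_lasso(A, b, lam=0.1)
print("nonzeros recovered:", np.count_nonzero(np.abs(x_hat) > 1e-6))
```

Adding a momentum (extrapolation) step between iterates turns this plain scheme into the accelerated FISTA variant alluded to above.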

Papers