Proximal Gradient
Proximal gradient methods are iterative optimization algorithms designed to efficiently minimize objectives composed of a smooth part and a nonsmooth part, a structure common in machine learning and signal processing: the smooth term is handled with a gradient step, while the nonsmooth term is handled through its proximal operator. Current research focuses on extending these methods to nonconvex and nonsmooth objectives, developing accelerated variants (e.g., using Nesterov acceleration or adaptive momentum), and adapting them to distributed and federated learning settings. These advancements are significant because they enable the solution of increasingly complex optimization problems arising in diverse applications, including image processing, reinforcement learning, and large-scale data analysis.
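As a concrete illustration (not drawn from any of the papers listed below), the following is a minimal sketch of the proximal gradient iteration, with optional Nesterov (FISTA-style) acceleration, applied to the lasso problem, where the proximal operator of the l1 penalty is soft-thresholding. The function names, step-size choice, and synthetic example are illustrative assumptions, not a reference implementation.

```python
import numpy as np


def soft_threshold(x, thresh):
    """Proximal operator of thresh * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)


def proximal_gradient_lasso(A, b, lam, step=None, n_iter=500, accelerate=True):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by (accelerated) proximal gradient.

    The least-squares term is the smooth part (handled by a gradient step);
    the l1 penalty is the nonsmooth part (handled by its prox, soft-thresholding).
    """
    m, n = A.shape
    if step is None:
        # 1 / Lipschitz constant of the gradient of the smooth part (||A||_2^2)
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    y = x.copy()   # extrapolated point, used only when accelerate=True
    t = 1.0        # Nesterov momentum parameter
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)                             # gradient of smooth part at y
        x_new = soft_threshold(y - step * grad, step * lam)  # prox (shrinkage) step
        if accelerate:
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # momentum extrapolation
            t = t_new
        else:
            y = x_new
        x = x_new
    return x


# Illustrative usage: sparse recovery from noisy linear measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 300))
x_true = np.zeros(300)
x_true[rng.choice(300, 10, replace=False)] = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = proximal_gradient_lasso(A, b, lam=0.1)
```

Setting `accelerate=False` recovers the basic (ISTA-style) proximal gradient iteration; the accelerated variant differs only in the extrapolation step, which is the kind of Nesterov acceleration referenced in the summary above.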
Papers
Nest-DGIL: Nesterov-optimized Deep Geometric Incremental Learning for CS Image Reconstruction
Xiaohong Fan, Yin Yang, Ke Chen, Yujie Feng, Jianping Zhang
PNN: From proximal algorithms to robust unfolded image denoising networks and Plug-and-Play methods
Hoang Trieu Vy Le, Audrey Repetti, Nelly Pustelnik