Bregman Proximal Methods
Bregman proximal methods are optimization algorithms that replace the squared Euclidean distance in classical proximal schemes with a Bregman divergence, a generalization that adapts the geometry of each update to the problem at hand. Current research focuses on improving the efficiency and convergence guarantees of these methods, particularly within gradient boosting machines, proximal gradient methods, and alternating minimization schemes for applications such as optimal transport and image processing. This work addresses limitations of existing approaches, such as spurious stationary points and high computational cost, leading to more robust and efficient solutions in machine learning, computer vision, and other areas that require complex optimization.
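To make the idea concrete, here is a minimal sketch (not from the surveyed papers) of one Bregman proximal gradient step on the probability simplex. Choosing the negative-entropy mirror map makes the Bregman divergence the KL divergence, and the update reduces to the familiar exponentiated-gradient (mirror descent) form; the objective and step size below are illustrative assumptions.

```python
import numpy as np

def kl_bregman_prox_step(x, grad, step):
    """One Bregman proximal gradient step on the simplex.

    With the negative-entropy mirror map, the Bregman divergence is the
    KL divergence, and the proximal update has this closed form
    (exponentiated gradient). Illustrative sketch only.
    """
    y = x * np.exp(-step * grad)   # gradient step in the mirror (dual) space
    return y / y.sum()             # map back onto the probability simplex

# Toy problem: minimize f(x) = 0.5 * ||x - c||^2 over the simplex,
# where c already lies on the simplex, so the minimizer is c itself.
c = np.array([0.2, 0.5, 0.3])
x = np.full(3, 1.0 / 3.0)          # uniform starting point
for _ in range(200):
    x = kl_bregman_prox_step(x, grad=x - c, step=0.5)

print(np.round(x, 3))
```

Because every iterate stays strictly inside the simplex, no explicit projection or constraint handling is needed, which is precisely the appeal of matching the divergence to the problem geometry.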