Bregman Divergence
The Bregman divergence measures the discrepancy between two points as the gap between a strictly convex function and its first-order Taylor approximation, and it finds applications across machine learning and statistics. Current research focuses on leveraging Bregman divergences within optimization algorithms such as mirror descent and bilevel optimization, particularly for handling non-convex problems and improving convergence rates. Although not a true metric (it is generally asymmetric and violates the triangle inequality), this generalized distance measure is proving valuable in diverse areas, including density ratio estimation, clustering, and federated learning, because it provides a flexible framework for accommodating different loss functions and data geometries. The resulting algorithms often come with improved empirical performance and stronger theoretical guarantees than traditional methods.
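Concretely, for a strictly convex, differentiable generator $F$, the Bregman divergence between points $x$ and $y$ is

$$D_F(x, y) = F(x) - F(y) - \langle \nabla F(y),\, x - y \rangle,$$

i.e., the error of approximating $F(x)$ by the tangent plane of $F$ at $y$. Choosing $F(x) = \|x\|^2$ recovers the squared Euclidean distance, while the negative entropy $F(p) = \sum_i p_i \log p_i$ recovers the Kullback-Leibler divergence on probability vectors. As a minimal sketch (not tied to any particular paper), the Python snippet below implements this definition directly and checks those two classical special cases; the function names and test vectors are illustrative.

```python
import numpy as np

def bregman_divergence(F, grad_F, x, y):
    """D_F(x, y) = F(x) - F(y) - <grad F(y), x - y>."""
    return F(x) - F(y) - np.dot(grad_F(y), x - y)

# Generator F(x) = ||x||^2 recovers the squared Euclidean distance.
sq_norm = lambda v: np.dot(v, v)
grad_sq_norm = lambda v: 2.0 * v

# Negative entropy F(p) = sum_i p_i log p_i recovers the KL divergence
# between probability vectors (assuming strictly positive entries).
neg_entropy = lambda p: np.sum(p * np.log(p))
grad_neg_entropy = lambda p: np.log(p) + 1.0

x = np.array([0.2, 0.3, 0.5])  # illustrative probability vectors
y = np.array([0.3, 0.3, 0.4])

print(bregman_divergence(sq_norm, grad_sq_norm, x, y))         # equals ||x - y||^2
print(bregman_divergence(neg_entropy, grad_neg_entropy, x, y)) # equals KL(x || y)
```

Different generators $F$ thus induce different geometries on the same data, which is exactly the flexibility that mirror descent, clustering, and density ratio estimation methods exploit.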