Bregman Divergence
A Bregman divergence measures the discrepancy between two points as the gap between a convex function and its first-order Taylor approximation: D_f(x, y) = f(x) - f(y) - ⟨∇f(y), x - y⟩. Choosing f(x) = ||x||² recovers the squared Euclidean distance, while the negative entropy recovers the Kullback-Leibler divergence. Current research focuses on leveraging Bregman divergences within optimization algorithms such as mirror descent and bilevel optimization, particularly for handling non-convex problems and improving convergence rates. This generalized distance is proving valuable across machine learning and statistics, including density ratio estimation, clustering, and federated learning, because it provides a flexible framework for accommodating different loss functions and data geometries. The resulting algorithms often come with stronger theoretical guarantees and improved empirical performance compared to methods built on the Euclidean distance alone.
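The definition above can be sketched directly in code. The following is a minimal illustration (function names are my own, not from any particular library): a generic `bregman_divergence` that takes a convex function and its gradient, then recovers the squared Euclidean distance from f(z) = ||z||² and the KL divergence from the negative entropy.

```python
import numpy as np

def bregman_divergence(f, grad_f, x, y):
    """D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>."""
    return f(x) - f(y) - np.dot(grad_f(y), x - y)

x = np.array([0.2, 0.8])
y = np.array([0.5, 0.5])

# f(z) = ||z||^2 yields the squared Euclidean distance ||x - y||^2.
sq = lambda z: np.dot(z, z)
sq_grad = lambda z: 2.0 * z
d_euclid = bregman_divergence(sq, sq_grad, x, y)

# f(z) = sum z_i log z_i (negative entropy) yields, for points on the
# probability simplex, the Kullback-Leibler divergence KL(x || y).
negent = lambda z: np.sum(z * np.log(z))
negent_grad = lambda z: np.log(z) + 1.0
d_kl = bregman_divergence(negent, negent_grad, x, y)
```

Note the asymmetry: unlike a metric, D_f(x, y) generally differs from D_f(y, x), which is why Bregman divergences are called generalized distances rather than distances.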