Divergence Optimization
Divergence optimization focuses on minimizing the difference between probability distributions, a crucial task in machine learning for aligning models with desired behaviors or transferring knowledge across datasets. Current research explores this through various divergences (e.g., KL-divergence, Jensen-Shannon divergence, f-divergences) across diverse settings, including generative adversarial networks (GANs) and reinforcement learning frameworks. These techniques are vital for improving the performance and robustness of machine learning algorithms in applications ranging from image generation and domain adaptation to aligning language models with human preferences and optimizing data discretization for classification.
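To make the core idea concrete, here is a minimal sketch (not from any of the surveyed papers) of divergence optimization in its simplest form: fitting a parameterized categorical distribution q to a fixed target p by gradient descent on the KL-divergence KL(p || q). The variable names and hyperparameters are illustrative, and PyTorch is assumed.

import torch

torch.manual_seed(0)

# Fixed target distribution p and the parameters (logits) of q.
p = torch.tensor([0.1, 0.2, 0.3, 0.4])
logits = torch.zeros(4, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(200):
    q = torch.softmax(logits, dim=0)
    # KL(p || q) = sum_i p_i * (log p_i - log q_i)
    kl = torch.sum(p * (torch.log(p) - torch.log(q)))
    opt.zero_grad()
    kl.backward()
    opt.step()

print(torch.softmax(logits, dim=0).detach())  # approaches p as KL -> 0

The same pattern generalizes to the other divergences mentioned above: swapping the kl line for a Jensen-Shannon or other f-divergence objective changes which discrepancies between p and q are penalized most heavily, which is one reason the choice of divergence matters in GAN training and preference alignment.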