Divergence Optimization

Divergence optimization focuses on minimizing the difference between probability distributions, a crucial task in machine learning for aligning models with desired behaviors or transferring knowledge across datasets. Current research explores a range of divergence measures (e.g., KL divergence, Jensen-Shannon divergence, and the broader family of f-divergences) within diverse frameworks, including generative adversarial networks (GANs) and reinforcement learning. These techniques are vital for improving the performance and robustness of machine learning algorithms in applications ranging from image generation and domain adaptation to aligning language models with human preferences and optimizing data discretization for classification.
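
To make the core objective concrete, the sketch below fits a categorical model distribution q = softmax(θ) to a fixed target p by gradient descent on the forward KL divergence KL(p ∥ q). It is a minimal illustration under stated assumptions, not drawn from any of the papers listed here: the NumPy implementation, the closed-form gradient q − p (which holds because the entropy of p is constant in θ), and the learning rate are all illustrative choices.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over logits z.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    # Forward KL divergence KL(p || q) for discrete distributions.
    return float(np.sum(p * (np.log(p) - np.log(q))))

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(5))   # fixed target distribution (illustrative)
theta = np.zeros(5)             # logits parameterizing the model q

lr = 0.5
for step in range(200):
    q = softmax(theta)
    # For KL(p || q) with q = softmax(theta), the gradient w.r.t. theta
    # reduces to q - p, since the entropy term H(p) does not depend on theta.
    theta -= lr * (q - p)

print("final KL(p || q):", kl(p, softmax(theta)))
```

Swapping the objective, e.g. to the reverse KL(q ∥ p) or a Jensen-Shannon divergence, changes which modes of the target the model prioritizes, which is one reason the choice of divergence matters in the settings surveyed above.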

Papers