KL Divergence
Kullback-Leibler (KL) divergence quantifies how one probability distribution differs from a second, reference distribution, making it a foundational tool across machine learning. Current research focuses on refining its use in areas such as reinforcement learning (where KL penalties against a reference policy mitigate reward misspecification), knowledge distillation (improving the efficiency and accuracy of transferring knowledge from a teacher model to a student), and Bayesian neural networks (enabling well-defined variational inference). These advances improve model training, uncertainty quantification, and anomaly detection, with impact in fields ranging from natural language processing to robotics and causal inference.
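For two discrete distributions P and Q over the same support, the KL divergence is defined as D_KL(P ‖ Q) = Σ_i P(i) log(P(i) / Q(i)). The short Python sketch below computes it with NumPy; the distributions p and q are hypothetical values chosen purely for illustration, not taken from the work summarized here.

    import numpy as np

    # Hypothetical discrete distributions P and Q over a three-element support.
    p = np.array([0.5, 0.3, 0.2])
    q = np.array([0.4, 0.4, 0.2])

    # D_KL(P || Q) = sum_i P(i) * log(P(i) / Q(i)); finite only where q > 0 wherever p > 0.
    kl_pq = np.sum(p * np.log(p / q))
    kl_qp = np.sum(q * np.log(q / p))

    print(f"KL(P||Q) = {kl_pq:.4f}")  # ~0.0253 nats for these values
    print(f"KL(Q||P) = {kl_qp:.4f}")  # ~0.0258 nats: KL is asymmetric in general

Because the natural logarithm is used, the result is in nats. The asymmetry shown in the output is why the choice between forward and reverse KL matters in practice, for example in knowledge distillation objectives and in variational inference.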