KL Divergence
Kullback-Leibler (KL) divergence quantifies how one probability distribution differs from another, making it a crucial tool across machine learning. Current research focuses on refining its use in reinforcement learning (mitigating reward misspecification), knowledge distillation (transferring knowledge between models more efficiently and accurately), and Bayesian neural networks (achieving well-defined variational inference). These advances are improving model training, uncertainty quantification, and anomaly detection, with applications ranging from natural language processing to robotics and causal inference.
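As a concrete illustration, here is a minimal sketch of the discrete definition, D_KL(P || Q) = sum_x P(x) log(P(x) / Q(x)), in plain NumPy; the distributions p and q below are hypothetical examples, not taken from any of the cited work:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL divergence D_KL(P || Q) = sum_x P(x) * log(P(x) / Q(x)).

    p and q must be probability vectors over the same support (non-negative,
    summing to 1). eps guards against log(0) and division by zero, at the
    cost of a tiny bias.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# KL divergence is asymmetric: D_KL(P||Q) != D_KL(Q||P) in general.
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
print(kl_divergence(p, q))  # ~0.085
print(kl_divergence(q, p))  # ~0.092
```

The asymmetry is exactly what the applications above exploit: for example, knowledge distillation typically minimizes the KL divergence from the teacher's output distribution to the student's, and variational inference minimizes the KL divergence from the approximate posterior to the true posterior.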