Divergence Estimation
Divergence estimation quantifies the difference between probability distributions, a task that arises across many fields. Current research focuses on robust and efficient estimators, particularly for high-dimensional data and complex distributions, targeting measures such as the Kullback-Leibler and Jensen-Shannon divergences and incorporating techniques such as Gaussian smoothing and adaptive weighting to improve accuracy and computational efficiency. These advances have direct implications for applications including synthetic data validation, machine learning model training and evaluation, and signal processing in challenging environments.
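To make the task concrete, the sketch below shows a minimal plug-in estimator of the Kullback-Leibler and Jensen-Shannon divergences from one-dimensional samples, using shared-bin histograms. This is an illustrative baseline only, not the method of any paper listed here; the function names, bin count, and clipping constant are assumptions for the example, and histogram plug-in estimates are biased and scale poorly with dimension, which is precisely why the smoothed and adaptively weighted estimators mentioned above are studied.

```python
import numpy as np


def shared_histograms(x, y, bins=50):
    # Bin both sample sets on a common grid so the probability vectors align.
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    p_counts, edges = np.histogram(x, bins=bins, range=(lo, hi))
    q_counts, _ = np.histogram(y, bins=edges)
    return p_counts / p_counts.sum(), q_counts / q_counts.sum()


def kl_divergence(p, q, eps=1e-12):
    # KL(P || Q) = sum_i p_i * log(p_i / q_i); clip to avoid log(0).
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))


def js_divergence(p, q):
    # JS(P, Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M), with M the equal mixture.
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, size=10_000)   # samples from P
    y = rng.normal(0.5, 1.2, size=10_000)   # samples from Q
    p, q = shared_histograms(x, y, bins=60)
    print("KL(P || Q) estimate:", kl_divergence(p, q))
    print("JS(P, Q) estimate:  ", js_divergence(p, q))
```

Note that the Jensen-Shannon divergence is symmetric and bounded, which is one reason it is often preferred over KL when comparing, for example, synthetic data against real data.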