Normalizing Constant
A normalizing constant is the scaling factor that makes an unnormalized density integrate to one; in probability distributions and many machine learning models it takes the form of an integral that is difficult or impossible to compute analytically. Current research centers on efficient estimation techniques, including annealed importance sampling, noise-contrastive estimation, and Bayesian quadrature or optimization, with the choice of method depending on the structure of the problem, and on the inherent coupling between the normalizing constant and model parameters in continuous-time systems. Accurate and efficient estimation of normalizing constants is essential for reliable probabilistic inference and model training, with applications ranging from statistical physics to deep learning.
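The core idea behind these estimators can be seen in plain importance sampling, the building block that annealed importance sampling extends with a sequence of intermediate distributions. The sketch below is a minimal illustration, not taken from any particular paper: the unnormalized target, the Gaussian proposal, and the sample size are all assumed choices.

```python
import numpy as np

# Minimal sketch: estimate Z = ∫ p̃(x) dx for an unnormalized density p̃
# via importance sampling, Z ≈ (1/N) Σ p̃(x_i)/q(x_i) with x_i ~ q.
# The target and proposal here are illustrative assumptions.

def log_unnormalized_target(x):
    # log p̃(x) for p̃(x) = exp(-x^2 / 2); true Z = sqrt(2π) ≈ 2.5066
    return -0.5 * x**2

def log_proposal(x, scale=2.0):
    # log density of the proposal q = N(0, scale^2)
    return -0.5 * (x / scale) ** 2 - np.log(scale * np.sqrt(2.0 * np.pi))

def estimate_Z(n_samples=100_000, scale=2.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, scale, size=n_samples)  # draws from the proposal
    log_w = log_unnormalized_target(x) - log_proposal(x, scale)
    # average the importance weights in log space for numerical stability
    return np.exp(np.logaddexp.reduce(log_w) - np.log(n_samples))

print(estimate_Z())  # ≈ 2.5066, i.e. sqrt(2π)
```

This estimator is unbiased, but its variance grows rapidly when the proposal is a poor match for the target; annealed importance sampling addresses exactly this by moving samples through a sequence of distributions that interpolate between the proposal and the target.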