Importance Distribution

Importance distribution research focuses on choosing which samples to draw, and with what weights, so that Monte Carlo estimators become more efficient: reducing the variance of gradient estimates during training and sharpening estimates of rare-event probabilities. Current research explores adaptive and multiple importance sampling techniques, often employing neural networks (e.g., within Beran-based models) or tensor-train decompositions to approximate the optimal importance distribution. These advances yield faster training convergence and more reliable estimates in high-dimensional problems, with applications in Bayesian inference and survival analysis.
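The rare-event use case above can be illustrated with a minimal sketch (the function name and parameters here are illustrative, not from any cited paper): to estimate P(X > 4) for a standard normal X, naive Monte Carlo almost never produces a sample past the threshold, but sampling from a proposal shifted toward the rare region and reweighting each draw by the likelihood ratio p(x)/q(x) gives a low-variance estimate.

```python
import math
import random

def importance_estimate(threshold=4.0, n=100_000, shift=4.0, seed=0):
    """Estimate P(X > threshold) for X ~ N(0, 1) via importance sampling.

    Samples are drawn from the shifted proposal q = N(shift, 1); each draw
    that lands past the threshold is weighted by the likelihood ratio
    p(x)/q(x) = exp(shift**2 / 2 - shift * x).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)  # draw from the proposal q
        if x > threshold:          # indicator of the rare event
            total += math.exp(shift * shift / 2.0 - shift * x)
    return total / n
```

Centering the proposal at the threshold (shift = 4) means roughly half the draws contribute, whereas under the original distribution only about 3 in 100,000 would; the weights correct for the biased sampling so the estimator remains unbiased.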

Papers