Kernel Density
Kernel density estimation (KDE) is a non-parametric method for estimating the probability density function of a random variable, aiming to reconstruct the underlying data distribution from a set of samples. Current research focuses on improving KDE's efficiency and robustness in high-dimensional spaces, addressing challenges like computational complexity and sensitivity to outliers through techniques such as adaptive bandwidth selection, alternative kernel choices (e.g., the Epanechnikov kernel), and the integration of KDE with other machine learning methods (e.g., variational autoencoders, transformers). These advances broaden KDE's applicability in fields such as anomaly detection, domain adaptation, and statistical inference more generally.
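To make the idea concrete, here is a minimal sketch of one-dimensional KDE with a Gaussian kernel. The function name, the use of Silverman's rule of thumb for the bandwidth, and the test data are illustrative choices, not taken from any of the papers listed below:

```python
import numpy as np

def gaussian_kde_1d(samples, eval_points, bandwidth=None):
    """Minimal 1-D Gaussian kernel density estimate.

    If bandwidth is None, Silverman's rule of thumb is used:
    h = 1.06 * std(samples) * n**(-1/5).
    """
    samples = np.asarray(samples, dtype=float)
    eval_points = np.asarray(eval_points, dtype=float)
    n = samples.size
    if bandwidth is None:
        bandwidth = 1.06 * samples.std(ddof=1) * n ** (-1 / 5)
    # Scaled distances between every evaluation point and every sample.
    u = (eval_points[:, None] - samples[None, :]) / bandwidth
    # Average of Gaussian kernels centred on each sample.
    return np.exp(-0.5 * u**2).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))

# Example: estimate the density of standard-normal samples on a grid.
rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=500)
grid = np.linspace(-5.0, 5.0, 201)
density = gaussian_kde_1d(data, grid)
```

The bandwidth is the key tuning knob: too small and the estimate is noisy, too large and it over-smooths, which is why adaptive bandwidth selection is an active research topic.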
Papers
Marginal Post Processing of Bayesian Inference Products with Normalizing Flows and Kernel Density Estimators
Harry T. J. Bevins, William J. Handley, Pablo Lemos, Peter H. Sims, Eloy de Lera Acedo, Anastasia Fialkov, Justin Alsing
Towards Symbolic Time Series Representation Improved by Kernel Density Estimators
Matej Kloska, Viera Rozinajova