Normalizing Flows
Normalizing flows are a class of generative models that learn complex probability distributions by transforming a simple base distribution through a series of invertible transformations. Current research focuses on improving the efficiency and scalability of these flows, particularly for high-dimensional and multi-modal data, with advancements in architectures like continuous normalizing flows and the development of novel training algorithms such as flow matching. These models find applications across diverse fields, including image generation, Bayesian inference, and scientific modeling, offering advantages in density estimation, sampling, and uncertainty quantification.
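The core mechanism described above can be illustrated with a minimal sketch (not taken from any of the papers below): a one-dimensional affine "flow" y = a·x + b applied to a standard-normal base distribution, where the change-of-variables formula log p_Y(y) = log p_X(f⁻¹(y)) − log|a| yields the exact density of N(b, a²).

```python
import math

def base_log_prob(x: float) -> float:
    """Log-density of the standard-normal base distribution."""
    return -0.5 * (x * x + math.log(2.0 * math.pi))

def flow_log_prob(y: float, a: float, b: float) -> float:
    """Log-density of y under the flow y = a*x + b, via change of variables."""
    x = (y - b) / a                              # inverse transform f^{-1}(y)
    return base_log_prob(x) - math.log(abs(a))   # log|det Jacobian| correction

# Sanity check against the analytic log-density of N(b, a^2):
a, b, y = 2.0, 1.0, 0.5
analytic = -0.5 * (((y - b) / a) ** 2 + math.log(2.0 * math.pi * a * a))
print(abs(flow_log_prob(y, a, b) - analytic) < 1e-12)  # True
```

Real flows compose many such invertible layers with learned, input-dependent parameters, accumulating one log-Jacobian term per layer in exactly this way.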
Papers
Boosting Summarization with Normalizing Flows and Aggressive Training
Yu Yang, Xiaotong Shen
Flexible Tails for Normalising Flows, with Application to the Modelling of Financial Return Data
Tennessee Hickling, Dennis Prangle
Uncertainty quantification and out-of-distribution detection using surjective normalizing flows
Simon Dirmeier, Ye Hong, Yanan Xin, Fernando Perez-Cruz
Simulation-based Inference for Exoplanet Atmospheric Retrieval: Insights from winning the Ariel Data Challenge 2023 using Normalizing Flows
Mayeul Aubin, Carolina Cuesta-Lazaro, Ethan Tregidga, Javier Viaña, Cecilia Garraffo, Iouli E. Gordon, Mercedes López-Morales, Robert J. Hargreaves, Vladimir Yu. Makhnev, Jeremy J. Drake, Douglas P. Finkbeiner, Phillip Cargile
Data-driven Modeling and Inference for Bayesian Gaussian Process ODEs via Double Normalizing Flows
Jian Xu, Shian Du, Junmei Yang, Xinghao Ding, John Paisley, Delu Zeng