Normalizing Flow
Normalizing flows are a class of generative models that learn complex probability distributions by transforming a simple base distribution through a series of invertible transformations. Current research focuses on improving the efficiency and scalability of these flows, particularly for high-dimensional and multi-modal data, with advancements in architectures like continuous normalizing flows and the development of novel training algorithms such as flow matching. These models find applications across diverse fields, including image generation, Bayesian inference, and scientific modeling, offering advantages in density estimation, sampling, and uncertainty quantification.
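The transformation described above can be illustrated with a minimal one-dimensional example: a single affine flow layer applied to a standard normal base distribution, using the change-of-variables formula for the density. This is a hedged sketch in NumPy; the affine transform y = a*x + b and its parameter values are illustrative choices, not drawn from any of the papers listed below.

```python
import numpy as np

def base_log_prob(x):
    # Log-density of the standard normal base distribution
    return -0.5 * (x**2 + np.log(2 * np.pi))

# A single affine flow layer y = a*x + b, invertible whenever a != 0
# (illustrative parameters)
a, b = 2.0, 1.0

def flow_forward(x):
    # Sampling: push base samples through the invertible transform
    return a * x + b

def flow_log_prob(y):
    # Density estimation via change of variables:
    #   log p_Y(y) = log p_X(f^{-1}(y)) - log |det df/dx|
    x = (y - b) / a
    return base_log_prob(x) - np.log(abs(a))

# Draw samples from the flow: here they follow N(b, a^2) exactly
samples = flow_forward(np.random.randn(10000))
```

Stacking many such invertible layers (with learnable, nonlinear transforms) yields the expressive densities used in practice, while the log-determinant term keeps the density tractable at every step.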
Papers
Inductive Simulation of Calorimeter Showers with Normalizing Flows
Matthew R. Buckley, Claudius Krause, Ian Pang, David Shih
Computing high-dimensional optimal transport by flow neural networks
Chen Xu, Xiuyuan Cheng, Yao Xie
Improving Multimodal Joint Variational Autoencoders through Normalizing Flows and Correlation Analysis
Agathe Senellart, Clément Chadebec, Stéphanie Allassonnière
Tensorizing flows: a tool for variational inference
Yuehaw Khoo, Michael Lindsey, Hongli Zhao
Normalizing flows for lattice gauge theory in arbitrary space-time dimension
Ryan Abbott, Michael S. Albergo, Aleksandar Botev, Denis Boyda, Kyle Cranmer, Daniel C. Hackett, Gurtej Kanwar, Alexander G. D. G. Matthews, Sébastien Racanière, Ali Razavi, Danilo J. Rezende, Fernando Romero-López, Phiala E. Shanahan, Julian M. Urban