Normalizing Flows
Normalizing flows are a class of generative models that learn complex probability distributions by transforming a simple base distribution through a series of invertible transformations. Current research focuses on improving the efficiency and scalability of these flows for high-dimensional and multi-modal data, with architectural advances such as continuous normalizing flows and new training objectives such as flow matching. These models are applied across diverse fields, including image generation, Bayesian inference, and scientific modeling, offering tractable density estimation, efficient sampling, and uncertainty quantification.
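To make the change-of-variables idea in the summary concrete, here is a minimal sketch of a flow built from affine coupling layers, assuming PyTorch. The class names `AffineCoupling` and `Flow`, the toy data, and all hyperparameters are illustrative choices, not taken from any of the papers listed below; the key point is that each layer is invertible with a triangular Jacobian, so the exact log-density is cheap to compute.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Transforms half of the input conditioned on the other half.
    The Jacobian is triangular, so log|det J| is just the sum of log-scales."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)            # bound the scales for stability
        y2 = x2 * log_s.exp() + t            # invertible affine map of x2
        return torch.cat([x1, y2], dim=1), log_s.sum(dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(y1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        return torch.cat([y1, (y2 - t) * (-log_s).exp()], dim=1)

class Flow(nn.Module):
    """Stack of couplings: log p(x) = log p_base(f(x)) + sum of log-dets."""
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([AffineCoupling(dim) for _ in range(n_layers)])
        self.base = torch.distributions.Normal(0.0, 1.0)

    def log_prob(self, x):
        log_det = x.new_zeros(x.shape[0])
        for layer in self.layers:
            x, ld = layer(x)
            x = x.flip(1)                    # alternate which half is updated
            log_det = log_det + ld
        return self.base.log_prob(x).sum(dim=1) + log_det

# Fit by maximum likelihood on 2-D toy data (illustrative only).
flow = Flow(dim=2)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
for _ in range(1000):
    x = torch.randn(256, 2) @ torch.tensor([[1.0, 0.8], [0.0, 0.6]])
    loss = -flow.log_prob(x).mean()          # negative log-likelihood
    opt.zero_grad(); loss.backward(); opt.step()
```

For the flow matching objective mentioned above, a sketch of its basic conditional form follows, using a straight-line path from noise to data. `VelocityMLP` and `flow_matching_loss` are hypothetical names, and the minibatch optimal transport pairing studied in the Tong et al. paper below is omitted here for brevity.

```python
class VelocityMLP(nn.Module):
    """Time-conditioned vector field v_theta(t, x)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, t, x):
        return self.net(torch.cat([t, x], dim=1))

def flow_matching_loss(velocity_net, x1):
    """Regress onto the velocity of the path x_t = (1 - t) * x0 + t * x1,
    whose time derivative is simply x1 - x0."""
    x0 = torch.randn_like(x1)                # sample from the base distribution
    t = torch.rand(x1.shape[0], 1)           # one uniform time per example
    x_t = (1 - t) * x0 + t * x1              # point on the interpolating path
    target = x1 - x0                         # conditional velocity to match
    return (velocity_net(t, x_t) - target).pow(2).mean()
```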
Papers
Learning Electron Bunch Distribution along a FEL Beamline by Normalising Flows
Anna Willmann, Jurjen Couperus Cabadağ, Yen-Yu Chang, Richard Pausch, Amin Ghaith, Alexander Debus, Arie Irman, Michael Bussmann, Ulrich Schramm, Nico Hoffmann
VarianceFlow: High-Quality and Controllable Text-to-Speech using Variance Information via Normalizing Flow
Yoonhyung Lee, Jinhyeok Yang, Kyomin Jung
Q-Flow: Generative Modeling for Differential Equations of Open Quantum Dynamics with Normalizing Flows
Owen Dugan, Peter Y. Lu, Rumen Dangovski, Di Luo, Marin Soljačić
Comparative Study of Coupling and Autoregressive Flows through Robust Statistical Tests
Andrea Coccaro, Marco Letizia, Humberto Reyes-Gonzalez, Riccardo Torre
Training Normalizing Flows with the Precision-Recall Divergence
Alexandre Verine, Benjamin Negrevergne, Muni Sreenivas Pydi, Yann Chevaleyre
Improving and generalizing flow-based generative models with minibatch optimal transport
Alexander Tong, Kilian Fatras, Nikolay Malkin, Guillaume Huguet, Yanlei Zhang, Jarrid Rector-Brooks, Guy Wolf, Yoshua Bengio
Normalizing Flow based Feature Synthesis for Outlier-Aware Object Detection
Nishant Kumar, Siniša Šegvić, Abouzar Eslami, Stefan Gumhold