Generative Flow Network
Generative Flow Networks (GFlowNets) are a class of generative models that sample complex objects from unnormalized probability distributions, typically proportional to a reward function, by framing generation as a sequential decision-making problem. Current research focuses on improving training efficiency and exploration, notably through novel loss functions, meta-learning techniques, and integration with methods such as Monte Carlo Tree Search and reinforcement learning, and extends GFlowNets to both discrete and continuous state spaces as well as multi-agent scenarios. By enabling efficient exploration of vast search spaces, GFlowNets offer a powerful alternative to traditional methods for generating diverse, high-quality samples, with impact in fields such as drug discovery, materials science, and combinatorial optimization.
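To make the sequential-sampling idea concrete, the sketch below trains a GFlowNet with the trajectory balance objective, which pushes the sampler toward drawing terminal objects with probability proportional to their reward. It is a minimal illustration under assumed simplifications, not the method of any paper listed here: the environment (fixed-length bit strings built one bit at a time), the toy reward, and names such as TBGFlowNet and sample_trajectory are hypothetical.

# Minimal trajectory balance sketch for a GFlowNet (assumptions noted above).
import torch
import torch.nn as nn

class TBGFlowNet(nn.Module):
    def __init__(self, seq_len=8, hidden=64):
        super().__init__()
        self.seq_len = seq_len
        # log of the partition function Z, learned jointly with the policy
        self.log_z = nn.Parameter(torch.zeros(1))
        # forward policy P_F(action | state): append a 0 or a 1
        self.policy = nn.Sequential(
            nn.Linear(seq_len, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def sample_trajectory(self):
        """Build a bit string left to right, accumulating log P_F of the trajectory."""
        state = torch.zeros(self.seq_len)
        log_pf = torch.zeros(1)
        for t in range(self.seq_len):
            logits = self.policy(state)
            dist = torch.distributions.Categorical(logits=logits)
            action = dist.sample()
            log_pf = log_pf + dist.log_prob(action)
            state = state.clone()          # avoid in-place edits on tensors saved for backward
            state[t] = action.float()
        return state, log_pf

def reward(x):
    # toy reward: prefer bit strings with many ones (small constant keeps log finite)
    return x.sum() + 1e-3

def tb_loss(model):
    # Trajectory balance: (log Z + log P_F(tau) - log R(x) - log P_B(tau | x))^2.
    # With a deterministic left-to-right construction there is one trajectory per
    # object, so P_B = 1 and its term vanishes.
    x, log_pf = model.sample_trajectory()
    return (model.log_z + log_pf - torch.log(reward(x))) ** 2

model = TBGFlowNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    loss = tb_loss(model)
    opt.zero_grad()
    loss.backward()
    opt.step()

After training, repeatedly calling model.sample_trajectory() should yield diverse high-reward strings rather than only the single reward maximizer, which is the behaviour that distinguishes GFlowNets from reward-maximizing reinforcement learning.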
Papers
Investigating Generalization Behaviours of Generative Flow Networks
Lazar Atanackovic, Emmanuel Bengio
QGFN: Controllable Greediness with Action Values
Elaine Lau, Stephen Zhewen Lu, Ling Pan, Doina Precup, Emmanuel Bengio
Improved off-policy training of diffusion samplers
Marcin Sendera, Minsu Kim, Sarthak Mittal, Pablo Lemos, Luca Scimeca, Jarrid Rector-Brooks, Alexandre Adam, Yoshua Bengio, Nikolay Malkin
Probabilistic Generative Modeling for Procedural Roundabout Generation for Developing Countries
Zarif Ikram, Ling Pan, Dianbo Liu
Pre-Training and Fine-Tuning Generative Flow Networks
Ling Pan, Moksh Jain, Kanika Madan, Yoshua Bengio
Learning Energy Decompositions for Partial Inference of GFlowNets
Hyosoon Jang, Minsu Kim, Sungsoo Ahn
Expected flow networks in stochastic environments and two-player zero-sum games
Marco Jiralerspong, Bilun Sun, Danilo Vucetic, Tianyu Zhang, Yoshua Bengio, Gauthier Gidel, Nikolay Malkin
Local Search GFlowNets
Minsu Kim, Taeyoung Yun, Emmanuel Bengio, Dinghuai Zhang, Yoshua Bengio, Sungsoo Ahn, Jinkyoo Park