Generative Modeling
Generative modeling aims to learn the underlying probability distribution of a dataset so that new data instances resembling it can be sampled. Current research emphasizes hybrid approaches that combine the strengths of autoregressive models (global context) and diffusion models (high-quality local detail), alongside advances in flow-based and score-based methods. By enabling the creation of realistic and diverse synthetic data, these techniques are impacting fields such as image generation, 3D modeling, and time series forecasting, as well as scientific applications like molecular dynamics simulation and medical image synthesis. A minimal sketch of the flow-matching objective underlying several of the papers below follows this paragraph.
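Flow-based methods such as flow matching, which appear in several of the listed papers, train a network to regress a velocity field along a simple path connecting noise to data. The following is a minimal, illustrative sketch of a rectified-flow-style flow matching objective on toy 2D data; the network architecture, data, and hyperparameters are hypothetical placeholders and are not drawn from any specific paper below.

```python
# Minimal flow matching sketch (rectified-flow style) on toy 2D data.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Small MLP predicting the velocity field v(x_t, t)."""
    def __init__(self, dim: int = 2, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x_t, t], dim=-1))

def flow_matching_loss(model: nn.Module, x1: torch.Tensor) -> torch.Tensor:
    """Regress the predicted velocity onto the straight-line target x1 - x0."""
    x0 = torch.randn_like(x1)             # noise endpoint of the path
    t = torch.rand(x1.shape[0], 1)        # time sampled uniformly in [0, 1]
    x_t = (1 - t) * x0 + t * x1           # linear interpolation between noise and data
    target = x1 - x0                      # constant velocity along the straight path
    return ((model(x_t, t) - target) ** 2).mean()

if __name__ == "__main__":
    model = VelocityNet(dim=2)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(100):                  # toy training loop on a shifted Gaussian
        x1 = torch.randn(256, 2) * 0.5 + 2.0   # stand-in for real data samples
        loss = flow_matching_loss(model, x1)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

After training, samples can be drawn by integrating the learned velocity field from noise at t = 0 to t = 1, for example with a simple Euler solver; score-based diffusion models follow the same train-then-integrate pattern but regress a noise or score target instead of a velocity.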
Papers
Distillation of Discrete Diffusion through Dimensional Correlations
Satoshi Hayakawa, Yuhta Takida, Masaaki Imaizumi, Hiromi Wakaki, Yuki Mitsufuji
Score Neural Operator: A Generative Model for Learning and Generalizing Across Multiple Probability Distributions
Xinyu Liao, Aoyang Qin, Jacob Seidman, Junqi Wang, Wei Wang, Paris Perdikaris
Vector Grimoire: Codebook-based Shape Generation under Raster Image Supervision
Moritz Feuerpfeil, Marco Cipriano, Gerard de Melo
Pyramidal Flow Matching for Efficient Video Generative Modeling
Yang Jin, Zhicheng Sun, Ningyuan Li, Kun Xu, Kun Xu, Hao Jiang, Nan Zhuang, Quzhe Huang, Yang Song, Yadong Mu, Zhouchen Lin
Flow Matching with Gaussian Process Priors for Probabilistic Time Series Forecasting
Marcel Kollovieh, Marten Lienen, David Lüdke, Leo Schwinn, Stephan Günnemann
Local Flow Matching Generative Models
Chen Xu, Xiuyuan Cheng, Yao Xie
Plug-and-Play Controllable Generation for Discrete Masked Models
Wei Guo, Yuchen Zhu, Molei Tao, Yongxin Chen
Bellman Diffusion: Generative Modeling as Learning a Linear Operator in the Distribution Space
Yangming Li, Chieh-Hsin Lai, Carola-Bibiane Schönlieb, Yuki Mitsufuji, Stefano Ermon
Conformal Generative Modeling with Improved Sample Efficiency through Sequential Greedy Filtering
Klaus-Rudolf Kladny, Bernhard Schölkopf, Michael Muehlebach
LaGeM: A Large Geometry Model for 3D Representation Learning and Diffusion
Biao Zhang, Peter Wonka
Denoising with a Joint-Embedding Predictive Architecture
Dengsheng Chen, Jie Hu, Xiaoming Wei, Enhua Wu