Generative Modeling
Generative modeling aims to create new data instances that resemble a given dataset by learning its underlying probability distribution. Current research emphasizes hybrid approaches that combine the strengths of autoregressive models (for global context) and diffusion models (for high-quality local detail), alongside advances in flow-based and score-based methods. By enabling the creation of realistic and diverse synthetic data, these techniques are reshaping fields such as image generation, 3D modeling, and time series forecasting, as well as scientific applications like molecular dynamics simulation and medical image synthesis.
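To make the diffusion component of that summary concrete, the sketch below shows the standard denoising-diffusion (DDPM-style) training objective in PyTorch. It is a minimal, generic illustration and is not drawn from any of the papers listed below; the model class TinyDenoiser, the linear noise schedule, and the toy 2-D data are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical tiny denoiser standing in for a real U-Net; purely illustrative.
class TinyDenoiser(nn.Module):
    def __init__(self, dim=2, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t, t):
        # Condition on the normalized timestep by simple concatenation.
        return self.net(torch.cat([x_t, t], dim=-1))

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)                # linear noise schedule (assumed)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)   # cumulative alpha-bar_t

def ddpm_loss(model, x0):
    """Denoising objective: predict the noise added at a randomly chosen step."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    a_bar = alphas_cumprod[t].unsqueeze(-1)
    noise = torch.randn_like(x0)
    # Forward diffusion q(x_t | x_0): interpolate between data and Gaussian noise.
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    pred = model(x_t, t.float().unsqueeze(-1) / T)
    return nn.functional.mse_loss(pred, noise)

# Toy usage on 2-D data; real applications swap in images, meshes, or time series.
model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x0 = torch.randn(256, 2)
loss = ddpm_loss(model, x0)
loss.backward()
opt.step()
```

Sampling runs the learned denoiser backwards from pure noise over the same schedule; the hybrid approaches mentioned above typically replace a toy denoiser like this with a large conditional network coupled to an autoregressive component.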
Papers
BGDB: Bernoulli-Gaussian Decision Block with Improved Denoising Diffusion Probabilistic Models
Chengkun Sun, Jinqian Pan, Russell Stevens Terry, Jiang Bian, Jie Xu
CF-GO-Net: A Universal Distribution Learner via Characteristic Function Networks with Graph Optimizers
Zeyang Yu, Shengxi Li, Danilo Mandic
Masked Diffusion Models are Secretly Time-Agnostic Masked Models and Exploit Inaccurate Categorical Sampling
Kaiwen Zheng, Yongxin Chen, Hanzi Mao, Ming-Yu Liu, Jun Zhu, Qinsheng Zhang
MarS: a Financial Market Simulation Engine Powered by Generative Foundation Model
Junjie Li, Yang Liu, Weiqing Liu, Shikai Fang, Lewen Wang, Chang Xu, Jiang Bian