Generative Modeling
Generative modeling aims to create new data instances that resemble a given dataset by learning its underlying probability distribution. Current research emphasizes hybrid approaches that combine the strengths of autoregressive models (global context) and diffusion models (high-quality local detail), alongside advances in flow-based and score-based methods. By enabling the creation of realistic and diverse synthetic data, these techniques are having a significant impact on fields such as image generation, 3D modeling, time series forecasting, and scientific applications including molecular dynamics simulation and medical image synthesis.
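To make the diffusion side of this landscape concrete, here is a minimal sketch (not taken from any of the listed papers) of the standard epsilon-prediction training objective that denoising diffusion models optimize: a network is trained to predict the noise added to a data point at a randomly chosen timestep. The names TinyDenoiser, T, and the linear beta schedule are illustrative assumptions, not a specific paper's setup.

import torch
import torch.nn as nn

T = 1000                                         # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)            # linear noise schedule (assumed)
alphas_cumprod = torch.cumprod(1.0 - betas, 0)   # cumulative product \bar{alpha}_t

class TinyDenoiser(nn.Module):
    """Toy noise-prediction network for 2-D data; a stand-in for a U-Net or Transformer."""
    def __init__(self, dim=2, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t, t):
        # Condition on the (normalized) timestep by simple concatenation.
        t_emb = t.float().unsqueeze(-1) / T
        return self.net(torch.cat([x_t, t_emb], dim=-1))

def diffusion_loss(model, x0):
    """Epsilon-prediction objective: || eps - eps_theta(x_t, t) ||^2 averaged over the batch."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    eps = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].unsqueeze(-1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps   # forward (noising) process
    return ((eps - model(x_t, t)) ** 2).mean()

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x0 = torch.randn(256, 2)   # placeholder data; real systems use images, meshes, audio, etc.
loss = diffusion_loss(model, x0)
loss.backward()
opt.step()

Score-based and flow-based methods replace this objective with a score-matching or invertible-flow likelihood loss, but follow the same pattern of learning a transformation between noise and data.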
Papers
Conformal Generative Modeling with Improved Sample Efficiency through Sequential Greedy Filtering
Klaus-Rudolf Kladny, Bernhard Schölkopf, Michael Muehlebach
LaGeM: A Large Geometry Model for 3D Representation Learning and Diffusion
Biao Zhang, Peter Wonka
Denoising with a Joint-Embedding Predictive Architecture
Dengsheng Chen, Jie Hu, Xiaoming Wei, Enhua Wu
MIMII-Gen: Generative Modeling Approach for Simulated Evaluation of Anomalous Sound Detection System
Harsh Purohit, Tomoya Nishida, Kota Dohi, Takashi Endo, Yohei Kawaguchi
Token Caching for Diffusion Transformer Acceleration
Jinming Lou, Wenyang Luo, Yufan Liu, Bing Li, Xinmiao Ding, Weiming Hu, Jiajiong Cao, Yuming Li, Chenguang Ma
BGDB: Bernoulli-Gaussian Decision Block with Improved Denoising Diffusion Probabilistic Models
Chengkun Sun, Jinqian Pan, Russell Stevens Terry, Jiang Bian, Jie Xu
CF-GO-Net: A Universal Distribution Learner via Characteristic Function Networks with Graph Optimizers
Zeyang Yu, Shengxi Li, Danilo Mandic