Probabilistic Generative Model
Probabilistic generative models aim to learn the underlying probability distribution of data in order to generate new, similar samples. Current research emphasizes improving the efficiency and accuracy of these models, focusing on architectures such as variational autoencoders, diffusion models, and generative flow networks, along with inference techniques such as variational inference and Markov chain Monte Carlo. Applications span natural language processing, image synthesis, drug discovery, and social-science modeling, making these models powerful tools for data analysis and generation. Open challenges include handling distribution mismatch between datasets and scaling inference to high-dimensional data.
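The core idea above, fitting a distribution to data and then sampling from it, can be sketched in a deliberately tiny example. This is not any of the architectures named in the summary; it is a minimal illustration using a single Gaussian fitted by maximum likelihood, with toy data and all parameter values assumed for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 1000 draws from an "unknown" 1D distribution
# (here secretly a Gaussian with mean 2.0 and std 0.5).
data = rng.normal(loc=2.0, scale=0.5, size=1000)

# Learning step: estimate the distribution's parameters by maximum
# likelihood. For a Gaussian these are just the sample mean and std.
mu_hat = data.mean()
sigma_hat = data.std()

# Generation step: draw new, similar samples from the fitted model.
new_samples = rng.normal(loc=mu_hat, scale=sigma_hat, size=5)

print(f"fitted mean={mu_hat:.3f}, fitted std={sigma_hat:.3f}")
print("generated samples:", new_samples)
```

Richer models such as VAEs or diffusion models replace the closed-form Gaussian fit with learned neural parameterizations, but the learn-then-sample loop is the same.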
Papers
ProbNeRF: Uncertainty-Aware Inference of 3D Shapes from 2D Images
Matthew D. Hoffman, Tuan Anh Le, Pavel Sountsov, Christopher Suter, Ben Lee, Vikash K. Mansinghka, Rif A. Saurous
Modeling Inter-Dependence Between Time and Mark in Multivariate Temporal Point Processes
Govind Waghmare, Ankur Debnath, Siddhartha Asthana, Aakarsh Malhotra