Implicit Generative Models
Implicit generative models aim to create realistic synthetic data without explicitly defining the underlying probability distribution, focusing instead on learning transformations that map from a simple prior distribution to the target data distribution. Current research emphasizes developing more stable and efficient training methods, exploring architectures like generative adversarial networks (GANs) and score-based diffusion models, and investigating novel loss functions to overcome challenges like mode collapse and unstable training. This field is significant because it enables data augmentation, improved uncertainty quantification in Bayesian neural networks, and the generation of high-quality synthetic data for various applications, including 3D shape modeling, video generation, and robotic grasping.
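The defining idea above can be made concrete with a minimal sketch: an implicit model only specifies a simple prior plus a transformation, so sampling is trivial while the induced density in data space is never written down. The `generator` below is a hypothetical fixed map standing in for a learned network (the ring-shaped target and all parameter choices are illustrative assumptions, not taken from any of the listed papers).

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z):
    # Hypothetical fixed transform standing in for a learned network:
    # maps 2-D Gaussian noise onto a noisy ring, a data distribution
    # for which we hold no closed-form density.
    angle = np.arctan2(z[:, 1], z[:, 0])
    radius = 1.0 + 0.05 * np.linalg.norm(z, axis=1)
    return np.stack([radius * np.cos(angle), radius * np.sin(angle)], axis=1)

# Sampling is easy: draw from the simple prior, push through the map.
z = rng.standard_normal((5000, 2))
x = generator(z)

# We can draw samples and estimate statistics of the implicit
# distribution, even though p(x) was never defined explicitly.
radii = np.linalg.norm(x, axis=1)
print(x.shape, round(float(radii.mean()), 2))
```

Training such a model (by an adversarial discriminator or a score-matching objective) amounts to adjusting the transform's parameters until sample statistics like these match the real data, which is exactly where the mode-collapse and stability issues mentioned above arise.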
Papers
Partial Identification of Treatment Effects with Implicit Generative Models
Vahid Balazadeh, Vasilis Syrgkanis, Rahul G. Krishnan
Quantifying Quality of Class-Conditional Generative Models in Time-Series Domain
Alireza Koochali, Maria Walch, Sankrutyayan Thota, Peter Schichtel, Andreas Dengel, Sheraz Ahmed