Discrete Variational Autoencoders

Discrete variational autoencoders (VAEs) are generative models that represent data in a discrete latent space, with the goal of learning efficient, disentangled representations of the underlying factors of variation. Current research focuses on three main challenges: mitigating codebook collapse, where the model uses only a small fraction of the discrete codebook; improving disentanglement through inductive biases and quantization techniques; and designing efficient architectures, for example by incorporating transformers or diffusion models, for tasks such as image generation and reinforcement learning. These advances matter for applications including high-fidelity data synthesis, faster simulation (e.g., in high-energy physics), and more sample-efficient reinforcement-learning agents.
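For concreteness, the sketch below shows the vector-quantization step at the core of VQ-VAE-style discrete VAEs: continuous encoder outputs are snapped to their nearest codebook entry, a codebook/commitment loss keeps encoder and codebook aligned, and a straight-through estimator passes gradients through the non-differentiable lookup. This is a minimal illustration, not the method of any particular paper listed here; the class name and hyperparameters (`num_codes`, `code_dim`, `beta`) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Minimal VQ-VAE-style quantization layer (illustrative sketch)."""

    def __init__(self, num_codes=512, code_dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # weight of the commitment loss

    def forward(self, z):
        # z: (batch, code_dim) continuous encoder outputs.
        # Nearest-neighbour lookup in the codebook.
        dists = torch.cdist(z, self.codebook.weight)  # (batch, num_codes)
        indices = dists.argmin(dim=1)                 # discrete latent codes
        z_q = self.codebook(indices)                  # quantized vectors

        # Codebook loss pulls codes toward encoder outputs; the commitment
        # term (scaled by beta) pulls encoder outputs toward their codes.
        loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())

        # Straight-through estimator: forward pass uses z_q, but gradients
        # flow to z as if quantization were the identity.
        z_q = z + (z_q - z).detach()
        return z_q, indices, loss
```

In this picture, codebook collapse shows up as `indices` concentrating on a handful of entries while the rest are never selected; common mitigations in the literature include EMA codebook updates and resetting unused codes.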

Papers