β-VAE

Beta-VAEs (β-VAEs) are variational autoencoders modified to control the trade-off between the information content of a latent representation and the accuracy of data reconstruction, by weighting the KL-divergence term of the objective with a coefficient β. Current research focuses on improving the efficiency of β-VAE training, developing architectures such as hierarchical and multi-rate β-VAEs that tune this trade-off for specific applications, and exploring alternative latent-space geometries, such as hyperbolic spaces. This work is significant because it allows finer control over the learned representations, improving performance in tasks like disentanglement, speech enhancement, and robot trajectory modeling, and thereby advancing unsupervised representation learning.
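The trade-off described above can be sketched as the β-weighted objective itself: reconstruction loss plus β times the KL divergence between the approximate posterior and the prior. Below is a minimal NumPy illustration, assuming a diagonal-Gaussian posterior and a standard-normal prior; the function name and signature are illustrative, not from any specific paper's code.

```python
import numpy as np

def beta_vae_loss(recon_loss, mu, logvar, beta=4.0):
    """Illustrative β-VAE objective: reconstruction term + β-weighted KL.

    KL is the closed-form divergence between the diagonal-Gaussian
    posterior N(mu, diag(exp(logvar))) and the prior N(0, I).
    Setting beta=1 recovers the standard VAE objective; beta > 1
    penalizes latent information more heavily, encouraging
    disentangled, lower-capacity representations.
    """
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return recon_loss + beta * kl

# When mu=0 and logvar=0 the posterior equals the prior, so KL = 0
# and the loss reduces to the reconstruction term for any beta.
mu = np.zeros(8)
logvar = np.zeros(8)
print(beta_vae_loss(100.0, mu, logvar, beta=4.0))  # → 100.0
```

Raising β shrinks the KL term at the cost of reconstruction fidelity, which is exactly the information/accuracy trade-off the summary refers to.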

Papers