β-VAE
β-VAEs are variational autoencoders (VAEs) modified with a weighting hyperparameter β that controls the trade-off between the information content of the latent representation and the accuracy of data reconstruction. Current research focuses on improving the efficiency of β-VAE training, developing architectures such as hierarchical and multi-rate β-VAEs that optimize this trade-off for different applications, and exploring alternative latent-space geometries, such as hyperbolic spaces. This work matters because it gives finer control over the learned representations, improving performance on tasks like disentanglement, speech enhancement, and robot trajectory modeling, and advancing unsupervised representation learning more broadly.
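The trade-off described above is implemented by weighting the KL-divergence term of the standard VAE objective by β. The sketch below, using NumPy, shows the closed-form β-VAE loss for a diagonal-Gaussian posterior and a squared-error reconstruction term; the function name and the choice β = 4 are illustrative, not from any specific paper or library.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    # Reconstruction term: squared error summed over input dimensions.
    recon = np.sum((x - x_recon) ** 2)
    # Closed-form KL divergence between N(mu, diag(exp(log_var))) and N(0, I).
    kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
    # beta > 1 penalizes latent information capacity more strongly,
    # which empirically encourages disentangled representations.
    return recon + beta * kl

# Toy check: a posterior equal to the prior (mu = 0, log_var = 0)
# and a perfect reconstruction give zero loss.
mu = np.zeros(2)
log_var = np.zeros(2)
x = np.array([1.0, 2.0, 3.0])
loss = beta_vae_loss(x, x, mu, log_var, beta=4.0)
```

Setting β = 1 recovers the standard VAE evidence lower bound (up to the reconstruction likelihood model), while larger β trades reconstruction accuracy for a lower-information latent code.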