VQ-VAE
Vector-Quantized Variational Autoencoders (VQ-VAEs) are a class of deep learning models that learn discrete representations of data, enabling efficient compression, generation, and analysis. Current research focuses on improving the adaptability of VQ-VAEs to varying data scales and bit rates, enhancing their performance in specific applications such as human motion synthesis and image reconstruction, and developing robust variants that are less susceptible to outliers and noise. These advances are impacting fields such as computer vision, natural language processing, and healthcare through improved data efficiency, stronger generative capabilities, and more reliable feature extraction for downstream tasks.
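As a minimal sketch of the discretization step at the heart of a VQ-VAE: the encoder's continuous latent vectors are snapped to their nearest entries in a learned codebook, and the resulting integer indices form the discrete representation. The toy codebook and latents below are illustrative assumptions, not values from any paper.

```python
import math

def quantize(vectors, codebook):
    """Map each continuous latent vector to the index of its nearest
    codebook entry (Euclidean distance) -- the core VQ-VAE quantization."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    indices = [min(range(len(codebook)), key=lambda k: dist(v, codebook[k]))
               for v in vectors]
    quantized = [codebook[i] for i in indices]
    return indices, quantized

# Hypothetical 3-entry codebook in a 2-D latent space.
codebook = [[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]]
latents = [[0.9, 1.1], [0.1, -0.2]]
idx, q = quantize(latents, codebook)
# idx -> [1, 0]: each latent snaps to its closest code
```

In a full model the codebook is trained jointly with the encoder and decoder (gradients pass through the quantization via a straight-through estimator), but the nearest-neighbour lookup itself is exactly this simple.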