Vector Quantized Variational Autoencoders (VQ-VAEs)
Vector Quantized Variational Autoencoders (VQ-VAEs) are generative models that learn discrete representations of data, which makes them well suited to compression, generation, and downstream tasks such as clustering and anomaly detection. Current research focuses on improving VQ-VAE architectures: mitigating codebook collapse, enhancing semantic control, and adapting codebook size dynamically for better efficiency and performance across diverse data modalities. These advances are influencing fields from robotics and motion synthesis to medical imaging and satellite image analysis by enabling more efficient data handling, stronger generative capabilities, and more robust feature extraction for complex tasks.
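The discrete representation at the heart of a VQ-VAE comes from a vector-quantization bottleneck: each continuous encoder output is snapped to its nearest entry in a learned codebook, and a straight-through estimator lets gradients flow through the non-differentiable lookup. The sketch below illustrates this in PyTorch; the codebook size, embedding dimension, and commitment weight (`beta`) are illustrative assumptions, not values taken from any particular paper.

```python
# Minimal sketch of a VQ-VAE quantization layer (assumed hyperparameters).
import torch
import torch.nn as nn
import torch.nn.functional as F


class VectorQuantizer(nn.Module):
    def __init__(self, num_codes: int = 512, code_dim: int = 64, beta: float = 0.25):
        super().__init__()
        # Learned codebook of discrete embedding vectors.
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta

    def forward(self, z_e: torch.Tensor):
        # z_e: continuous encoder outputs of shape (batch, code_dim).
        # Find the nearest codebook entry for each encoder vector.
        distances = torch.cdist(z_e, self.codebook.weight)  # (batch, num_codes)
        indices = distances.argmin(dim=1)                    # discrete codes
        z_q = self.codebook(indices)                         # quantized vectors

        # Codebook loss pulls code vectors toward encoder outputs;
        # commitment loss keeps encoder outputs close to their assigned codes.
        codebook_loss = F.mse_loss(z_q, z_e.detach())
        commitment_loss = F.mse_loss(z_e, z_q.detach())
        vq_loss = codebook_loss + self.beta * commitment_loss

        # Straight-through estimator: copy gradients from z_q to z_e so the
        # encoder can be trained despite the non-differentiable argmin.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, indices, vq_loss


if __name__ == "__main__":
    vq = VectorQuantizer()
    z_e = torch.randn(8, 64)            # stand-in for encoder outputs
    z_q, indices, loss = vq(z_e)
    print(z_q.shape, indices.shape, loss.item())
```

Codebook collapse, mentioned above, occurs when only a few codebook entries ever win the nearest-neighbor lookup; common remedies include exponential-moving-average codebook updates and re-initializing unused codes.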
Papers (July 20, 2022 – October 14, 2024)