Vector-Quantized Variational Autoencoder (VQ-VAE)
Vector-quantized variational autoencoders (VQ-VAEs) are generative models that encode data into discrete latent representations by mapping encoder outputs to entries of a learned codebook, with the goals of efficient compression and high-quality reconstruction or generation. Current research focuses on improving VQ-VAE architectures, for example through decomposed structures, hierarchical refinements, and physics-informed constraints, to enhance performance across diverse applications. These advances are impacting fields ranging from robotics and speech processing to protein structure prediction and wireless communication, enabling better data efficiency, generation of realistic data, and more effective anomaly detection.
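To make the discrete-representation idea concrete, the sketch below shows the vector-quantization bottleneck at the heart of a VQ-VAE: continuous encoder outputs are snapped to their nearest codebook entry, a codebook/commitment loss is computed, and gradients are passed through with a straight-through estimator. This is a minimal illustration assuming PyTorch; the class name, hyperparameters, and tensor shapes are illustrative rather than taken from any specific paper listed here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VectorQuantizer(nn.Module):
    """Maps continuous encoder outputs to the nearest entry of a learned codebook."""

    def __init__(self, num_codes: int = 512, code_dim: int = 64, beta: float = 0.25):
        super().__init__()
        self.beta = beta  # weight of the commitment loss
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)

    def forward(self, z_e: torch.Tensor):
        # z_e: (batch, ..., code_dim) continuous latents from the encoder.
        flat = z_e.reshape(-1, z_e.shape[-1])                      # (N, D)
        # Squared Euclidean distance from each latent to every codebook vector.
        dists = (
            flat.pow(2).sum(1, keepdim=True)
            - 2 * flat @ self.codebook.weight.t()
            + self.codebook.weight.pow(2).sum(1)
        )                                                          # (N, K)
        indices = dists.argmin(dim=1)                              # discrete codes
        z_q = self.codebook(indices).view_as(z_e)                  # quantized latents

        # Codebook and commitment terms of the VQ-VAE objective.
        codebook_loss = F.mse_loss(z_q, z_e.detach())
        commitment_loss = F.mse_loss(z_e, z_q.detach())
        vq_loss = codebook_loss + self.beta * commitment_loss

        # Straight-through estimator: copy gradients from z_q back to z_e.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, vq_loss, indices.view(z_e.shape[:-1])
```

In a full model, the quantized latents `z_q` would be passed to a decoder, and `vq_loss` added to the reconstruction loss; the returned discrete `indices` are what downstream priors (e.g., autoregressive models) are trained on.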