Vector Quantization
Vector quantization (VQ) is a technique for representing continuous data, such as images or text, as discrete tokens drawn from a learned codebook, making the data amenable to efficient processing by models like transformers. Current research focuses on improving VQ tokenizers by optimizing compression strategies, balancing semantic preservation against detail retention, and exploring alternative training objectives that improve downstream performance in applications such as image generation and language modeling. These advances are driving gains in model efficiency, inference speed, and output quality, particularly in resource-constrained environments and for high-resolution image generation.
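At its core, a VQ tokenizer performs a nearest-neighbour lookup against a learned codebook: each continuous feature vector is replaced by the index of its closest code vector, and that index is the discrete token a downstream model consumes. The sketch below illustrates this single step with NumPy; the function name, shapes, and the random codebook are illustrative assumptions, not any particular paper's implementation.

```python
# A minimal sketch of the vector-quantization step, assuming a learned
# codebook is already available; names and shapes here are illustrative.
import numpy as np

def vector_quantize(latents: np.ndarray, codebook: np.ndarray):
    """Map continuous latent vectors to discrete codebook tokens.

    latents:  (N, D) array of continuous feature vectors.
    codebook: (K, D) array of K learned code vectors.
    Returns the (N,) token indices and the (N, D) quantized vectors.
    """
    # Squared Euclidean distance between every latent and every code vector.
    distances = (
        (latents ** 2).sum(axis=1, keepdims=True)
        - 2.0 * latents @ codebook.T
        + (codebook ** 2).sum(axis=1)
    )  # shape (N, K)
    tokens = distances.argmin(axis=1)   # discrete token index per latent
    quantized = codebook[tokens]        # replace each latent with its nearest code
    return tokens, quantized

# Example: quantize 4 latent vectors against a random 8-entry codebook.
rng = np.random.default_rng(0)
latents = rng.normal(size=(4, 16))
codebook = rng.normal(size=(8, 16))
tokens, quantized = vector_quantize(latents, codebook)
print(tokens)  # e.g. array of 4 indices in [0, 8) -- the tokens a transformer would consume
```

In a trained tokenizer the codebook is learned jointly with an encoder and decoder (for example via commitment and codebook losses, as in VQ-VAE-style models), but the quantization step itself remains this same nearest-code lookup.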