Sequence Compression
Sequence compression aims to reduce the computational cost and memory requirements of processing long data sequences, such as those found in speech, video, and reinforcement learning. Current research focuses on efficient compression techniques, including methods inspired by large language model tokenization (such as byte pair encoding) and methods that leverage latent representations of continuous-time processes to achieve variable or adaptive compression rates. These advances matter because they enable faster and more memory-efficient processing of long sequences, improving the scalability and applicability of machine learning models and algorithms across diverse domains.
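To make the byte-pair-encoding idea mentioned above concrete, here is a minimal sketch of BPE-style merging applied to a sequence of integer symbols: the most frequent adjacent pair is repeatedly replaced by a fresh symbol, shortening the sequence while keeping it exactly invertible. This is an illustration only, not the method of any particular paper; the function names (compress_bpe, decompress_bpe) are hypothetical, and the code assumes a non-empty sequence of non-negative integers.

from collections import Counter

def compress_bpe(seq, num_merges):
    """Repeatedly replace the most frequent adjacent pair in `seq`
    with a fresh symbol. Returns the shortened sequence and the
    merge table needed to invert the compression."""
    seq = list(seq)
    merges = {}                      # new_symbol -> (left, right)
    next_symbol = max(seq) + 1       # assumes integer symbols
    for _ in range(num_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:                # no pair repeats; merging no longer compresses
            break
        merges[next_symbol] = (a, b)
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                out.append(next_symbol)   # merge the pair into one symbol
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        next_symbol += 1
    return seq, merges

def decompress_bpe(seq, merges):
    """Invert the merges by expanding symbols in reverse creation order."""
    for sym in sorted(merges, reverse=True):
        a, b = merges[sym]
        out = []
        for s in seq:
            out.extend((a, b) if s == sym else (s,))
        seq = out
    return seq

if __name__ == "__main__":
    original = [1, 2, 3, 1, 2, 3, 1, 2, 4]
    compressed, table = compress_bpe(original, num_merges=4)
    assert decompress_bpe(compressed, table) == original
    print(original, "->", compressed)    # [1, 2, 3, ...] -> [6, 6, 5, 4]

On this toy input, two merge rounds shrink nine symbols to four; the same fixed-merge scheme is what variable- or adaptive-rate approaches relax, by choosing how aggressively to compress per input rather than applying one learned merge table everywhere.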