Adaptive Tokenization
Adaptive tokenization techniques aim to optimize the efficiency and performance of transformer-based models, particularly in vision and language processing, by dynamically adjusting the number or length of input tokens processed. Current research focuses on developing algorithms and model architectures that selectively retain informative tokens while discarding redundant ones, often leveraging attention mechanisms or psycholinguistic principles to guide this selection process. These advancements promise to improve the speed and resource efficiency of large language and vision-language models, making them more practical for real-world applications with limited computational resources.
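As a concrete illustration of attention-guided token selection, below is a minimal sketch (assuming PyTorch and a ViT-style layout where the first token is a [CLS] token); the function name `prune_tokens` and the ranking rule (mean attention received from the [CLS] query) are illustrative assumptions, not a specific published method.

```python
import torch

def prune_tokens(tokens, attn, keep_ratio=0.5):
    """Retain the patch tokens that receive the most attention from the [CLS] token.

    tokens: (batch, num_tokens, dim) token embeddings; token 0 is assumed to be [CLS].
    attn:   (batch, num_heads, num_tokens, num_tokens) attention weights from a transformer layer.
    keep_ratio: fraction of non-[CLS] tokens to keep.
    """
    batch, num_tokens, dim = tokens.shape
    # Score each patch token by the attention it receives from the [CLS] query, averaged over heads.
    cls_attn = attn[:, :, 0, 1:].mean(dim=1)                 # (batch, num_tokens - 1)
    num_keep = max(1, int(keep_ratio * (num_tokens - 1)))
    # Indices of the highest-scoring patch tokens, re-sorted to preserve original order.
    keep_idx = cls_attn.topk(num_keep, dim=1).indices         # (batch, num_keep)
    keep_idx, _ = keep_idx.sort(dim=1)
    patches = tokens[:, 1:, :]
    kept = patches.gather(1, keep_idx.unsqueeze(-1).expand(-1, -1, dim))
    # Re-attach the [CLS] token in front of the retained patch tokens.
    return torch.cat([tokens[:, :1, :], kept], dim=1)

# Usage: drop half of the patch tokens after a transformer block (hypothetical shapes).
if __name__ == "__main__":
    B, N, D, H = 2, 197, 768, 12
    tokens = torch.randn(B, N, D)
    attn = torch.softmax(torch.randn(B, H, N, N), dim=-1)
    pruned = prune_tokens(tokens, attn, keep_ratio=0.5)
    print(pruned.shape)  # torch.Size([2, 99, 768])
```

Real systems typically apply such pruning progressively across several layers and may merge, rather than discard, the low-scoring tokens, but the core idea of ranking tokens by an attention-derived importance score is the same.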