Autoregressive Language Model
Autoregressive language models (ALMs) are a class of neural networks that generate sequential data, primarily text, by predicting each element of a sequence from the elements that precede it. Current research focuses on improving ALM efficiency through techniques such as speculative decoding and blockwise parallel decoding, and on extending ALM capabilities by incorporating visual information and by addressing limitations in long-sequence modeling and knowledge distillation. These advances matter because they improve both the speed and the quality of text generation, with impact on applications ranging from machine translation and text-to-speech synthesis to more complex tasks such as scene reconstruction and e-commerce.
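To make the next-element prediction concrete, here is a minimal sketch of autoregressive sampling. The callable `next_token_logits` and the toy bigram table are hypothetical stand-ins for a real language model such as a Transformer; the point is only the loop that samples one token and feeds it back as context.

```python
import numpy as np

def sample_autoregressively(next_token_logits, prompt, max_new_tokens, eos_id=None):
    """Generate a sequence one token at a time.

    `next_token_logits` is any callable mapping the current token
    sequence to a vector of logits over the vocabulary (a stand-in
    for a real language model).
    """
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = next_token_logits(tokens)      # condition on the full prefix
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                    # softmax over the vocabulary
        next_id = int(np.random.choice(len(probs), p=probs))
        tokens.append(next_id)                  # feed the sample back in
        if eos_id is not None and next_id == eos_id:
            break
    return tokens

# Toy "model": a fixed bigram logit table over a 5-token vocabulary.
rng = np.random.default_rng(0)
bigram_logits = rng.normal(size=(5, 5))
print(sample_autoregressively(lambda t: bigram_logits[t[-1]], prompt=[0], max_new_tokens=8))
```

Because each step conditions on all previously generated tokens, generation is inherently serial, which is exactly the bottleneck the decoding-efficiency techniques above target.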
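Speculative decoding attacks that serial bottleneck by letting a small draft model propose several tokens that the large target model then verifies in a single parallel pass, accepting each proposal with probability min(1, p/q) so that the output distribution matches ordinary sampling from the target model. The sketch below follows that accept/reject scheme under stated assumptions: `target_probs`, `draft_probs`, and the toy bigram tables are hypothetical placeholders, not any particular paper's API.

```python
import numpy as np

rng = np.random.default_rng(1)

def speculative_step(target_probs, draft_probs, prefix, k):
    """One round of speculative decoding.

    `target_probs(seq)` / `draft_probs(seq)` return next-token
    distributions for the large and small model respectively; both
    are stand-ins for real model calls.
    """
    # 1) The cheap draft model proposes k tokens autoregressively.
    draft_tokens, q = [], []
    seq = list(prefix)
    for _ in range(k):
        dist = draft_probs(seq)
        t = int(rng.choice(len(dist), p=dist))
        draft_tokens.append(t); q.append(dist); seq.append(t)

    # 2) The target model scores every proposal position (conceptually
    #    in one parallel pass), then accepts/rejects in order.
    seq = list(prefix)
    for t, q_dist in zip(draft_tokens, q):
        p_dist = target_probs(seq)
        if rng.random() < min(1.0, p_dist[t] / q_dist[t]):
            seq.append(t)                      # accept the draft token
        else:
            residual = np.maximum(p_dist - q_dist, 0.0)
            residual /= residual.sum()         # resample from normalized (p - q)+
            seq.append(int(rng.choice(len(residual), p=residual)))
            return seq                         # stop at the first rejection
    # 3) All k proposals accepted: sample one bonus token from the target.
    p_dist = target_probs(seq)
    seq.append(int(rng.choice(len(p_dist), p=p_dist)))
    return seq

# Toy usage: bigram next-token tables for a 6-token vocabulary.
V = 6
tp = rng.dirichlet(np.ones(V), size=V)   # toy target model
dp = rng.dirichlet(np.ones(V), size=V)   # toy draft model
print(speculative_step(lambda s: tp[s[-1]], lambda s: dp[s[-1]], prefix=[0], k=4))
```

When the draft model agrees with the target often, several tokens are committed per target-model pass, which is where the speedup comes from.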