Autoregressive Language Model
Autoregressive language models (ALMs) are a class of neural networks that generate sequential data, primarily text, by predicting each element of a sequence from the elements that precede it. Current research focuses on improving ALM efficiency through techniques such as speculative decoding and blockwise parallel decoding, and on extending their capabilities by incorporating visual information and addressing limitations in long-sequence modeling and knowledge distillation. These advances matter because they improve both the speed and the quality of text generation, with impact ranging from machine translation and text-to-speech synthesis to more complex tasks such as scene reconstruction and e-commerce.
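Concretely, an ALM factorizes the probability of a sequence as p(x_1, …, x_T) = Π_t p(x_t | x_1, …, x_{t-1}) and generates by sampling one token at a time from that conditional. The following is a minimal, self-contained sketch of this loop; the toy vocabulary, the hand-built next_token_distribution table, and the generate helper are hypothetical stand-ins for a trained network, not any particular paper's model.

```python
import random

# Toy vocabulary; "<eos>" terminates generation.
VOCAB = ["<eos>", "the", "cat", "sat", "on", "mat"]

def next_token_distribution(prefix):
    """Hypothetical stand-in for a trained network: returns P(x_t | x_<t).
    A real ALM would compute this with a neural net over the full prefix."""
    last = prefix[-1] if prefix else None
    table = {
        None:  [0.0, 1.0, 0.0, 0.0, 0.0, 0.0],  # start -> "the"
        "the": [0.0, 0.0, 0.6, 0.0, 0.0, 0.4],  # "the" -> "cat" or "mat"
        "cat": [0.0, 0.0, 0.0, 1.0, 0.0, 0.0],  # "cat" -> "sat"
        "sat": [0.0, 0.0, 0.0, 0.0, 1.0, 0.0],  # "sat" -> "on"
        "on":  [0.0, 1.0, 0.0, 0.0, 0.0, 0.0],  # "on"  -> "the"
        "mat": [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # "mat" -> "<eos>"
    }
    return table[last]

def generate(max_len=10, seed=0):
    """Autoregressive sampling: each token is drawn conditioned on all
    previously generated tokens, one position at a time."""
    rng = random.Random(seed)
    tokens = []
    for _ in range(max_len):
        probs = next_token_distribution(tokens)
        token = rng.choices(VOCAB, weights=probs, k=1)[0]
        if token == "<eos>":
            break
        tokens.append(token)
    return " ".join(tokens)

print(generate())  # e.g. "the cat sat on the mat"
```

Because each step conditions on everything generated so far, decoding is inherently serial, one forward pass per token, which is exactly the bottleneck that speculative and blockwise parallel decoding target.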
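Speculative decoding, one of the efficiency techniques mentioned above, pairs a small draft model with the large target model: the draft cheaply proposes a block of tokens, and the target verifies the whole block in one parallel pass, accepting each token with probability min(1, p/q) so that the output distribution remains exactly the target's. Below is a hedged sketch under toy assumptions; target_dist, draft_dist, and the pseudo-softmax helper are invented for illustration, and a real system would batch the verification pass rather than loop.

```python
import random

rng = random.Random(0)
VOCAB = list(range(8))  # toy integer vocabulary

def _pseudo_softmax(prefix, salt):
    """Deterministic pseudo-distribution over VOCAB, standing in for a
    neural model's softmax output at the next position."""
    h = hash((salt, tuple(prefix)))
    weights = [((h >> (4 * i)) & 0xF) + 1 for i in range(len(VOCAB))]
    total = sum(weights)
    return [w / total for w in weights]

def target_dist(prefix):  # stand-in for the large, accurate model (p)
    return _pseudo_softmax(prefix, salt=1)

def draft_dist(prefix):   # stand-in for the small, fast draft model (q)
    return _pseudo_softmax(prefix, salt=2)

def speculative_step(prefix, gamma=4):
    """One draft-then-verify step. Returns 1..gamma+1 newly accepted tokens
    whose distribution matches pure sampling from the target model."""
    # 1) Draft: sample gamma tokens autoregressively from the cheap model.
    ctx = list(prefix)
    drafted = []
    for _ in range(gamma):
        x = rng.choices(VOCAB, weights=draft_dist(ctx), k=1)[0]
        drafted.append(x)
        ctx.append(x)
    # 2) Verify: the target scores every drafted position. A real system
    #    does this in a single batched forward pass; the loop here is
    #    only for clarity.
    ctx = list(prefix)
    accepted = []
    for x in drafted:
        p, q = target_dist(ctx), draft_dist(ctx)
        if rng.random() < min(1.0, p[x] / q[x]):
            accepted.append(x)  # token survives verification
            ctx.append(x)
        else:
            # Rejection: resample from the residual max(0, p - q),
            # renormalized. This correction keeps the overall output
            # distribution exactly equal to the target's.
            residual = [max(0.0, pi - qi) for pi, qi in zip(p, q)]
            accepted.append(rng.choices(VOCAB, weights=residual, k=1)[0])
            return accepted
    # 3) All drafts accepted: take one bonus token from the target.
    accepted.append(rng.choices(VOCAB, weights=target_dist(ctx), k=1)[0])
    return accepted

print(speculative_step(prefix=[3, 1, 4]))  # several tokens per target pass
```

On average the target model advances several tokens per forward pass, which is where the speedup comes from; blockwise parallel decoding pursues a similar effect by training the model itself to predict several future positions at once.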