Autoregressive Language Model
Autoregressive language models (ALMs) are a class of neural networks that generate sequential data, primarily text, by predicting each element of a sequence conditioned on the elements that precede it. Current research focuses on improving ALM efficiency through techniques such as speculative decoding and blockwise parallel decoding, and on extending their capabilities by incorporating visual information and addressing limitations in long-sequence modeling and knowledge distillation. These advances matter because they improve both the speed and the quality of text generation, with impact on applications ranging from machine translation and text-to-speech synthesis to more complex tasks such as scene reconstruction and e-commerce.
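The core autoregressive loop can be sketched in a few lines: repeatedly ask the model for a distribution over the next token given the sequence so far, pick a token, append it, and stop at an end marker. The sketch below is a minimal, hypothetical illustration that stands in a hand-written bigram table for the neural network; real ALMs compute these probabilities with learned parameters, but the generation loop has the same shape.

```python
# Minimal sketch of greedy autoregressive decoding.
# The bigram table below is a hypothetical stand-in for a neural model:
# it maps the most recent token to a next-token probability distribution.

BIGRAM = {
    "<s>":  {"the": 0.6, "a": 0.4},
    "the":  {"cat": 0.6, "dog": 0.4},
    "a":    {"cat": 0.7, "dog": 0.3},
    "cat":  {"sat": 0.9, "</s>": 0.1},
    "dog":  {"sat": 0.8, "</s>": 0.2},
    "sat":  {"</s>": 1.0},
}

def next_token(context):
    """Greedy decoding: return the highest-probability next token.

    A real ALM would condition on the full context; this toy model
    only looks at the last token.
    """
    probs = BIGRAM[context[-1]]
    return max(probs, key=probs.get)

def generate(max_len=10):
    """Generate tokens one at a time until </s> or max_len."""
    seq = ["<s>"]
    for _ in range(max_len):
        tok = next_token(seq)
        if tok == "</s>":
            break
        seq.append(tok)
    return seq[1:]  # drop the start marker

print(generate())  # -> ['the', 'cat', 'sat']
```

This one-token-at-a-time loop is also why decoding is slow, and why the speed-up techniques mentioned above (speculative decoding, blockwise parallel decoding) try to validate or emit several tokens per model call instead of one.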