Autoregressive Language Model
Autoregressive language models (ALMs) are a class of neural networks that generate sequential data, primarily text, by predicting each next element of a sequence from the elements that precede it. Current research focuses on improving ALM efficiency through techniques such as speculative decoding and blockwise parallel decoding, and on extending their capabilities by incorporating visual information and addressing limitations in long-sequence modeling and knowledge distillation. These advances matter because they improve both the speed and the quality of text generation, with impact on applications ranging from machine translation and text-to-speech synthesis to more complex tasks such as scene reconstruction and e-commerce.
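The core autoregressive loop described above can be sketched as follows. This is a minimal illustration, not any specific model: the "model" here is a hand-written bigram table, and all names (`BIGRAMS`, `generate`, the `<s>`/`</s>` markers) are hypothetical stand-ins for a learned next-token distribution.

```python
import random

# Toy "model": for each token, the set of tokens that may follow it.
# A real ALM would instead produce a probability distribution over the
# whole vocabulary from a neural network conditioned on the prefix.
BIGRAMS = {
    "<s>": ["the"],
    "the": ["cat", "dog"],
    "cat": ["sat", "ran"],
    "dog": ["sat"],
    "sat": ["</s>"],
    "ran": ["</s>"],
}

def generate(max_len=10, seed=0):
    """Autoregressive generation: predict, append, repeat."""
    rng = random.Random(seed)
    tokens = ["<s>"]
    for _ in range(max_len):
        # Sample the next token conditioned on the preceding context
        # (here only the last token, since the model is a bigram table).
        nxt = rng.choice(BIGRAMS[tokens[-1]])
        tokens.append(nxt)
        if nxt == "</s>":  # stop when the end-of-sequence marker appears
            break
    return tokens[1:-1]  # drop the start/end markers

print(" ".join(generate()))
```

Because each step conditions only on already-generated tokens, decoding is inherently sequential; this is exactly the bottleneck that speculative and blockwise parallel decoding aim to relieve by proposing several tokens at once and verifying them in a single model pass.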