Autoregressive Language Model
Autoregressive language models (ALMs) are a class of neural networks designed to generate sequential data, primarily text, by predicting each element of a sequence from the elements that precede it. Current research focuses on improving ALM efficiency through techniques like speculative decoding and blockwise parallel decoding, as well as enhancing their capabilities by incorporating visual information and addressing limitations in long-sequence modeling and knowledge distillation. These advances matter because they improve both the speed and the quality of text generation, with impact ranging from machine translation and text-to-speech synthesis to more complex tasks like scene reconstruction and e-commerce.
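To make the next-token factorization p(x_1..x_T) = ∏_t p(x_t | x_<t) concrete, here is a minimal sketch of autoregressive generation. A toy bigram table stands in for a learned network; the vocabulary, table, and function names are illustrative assumptions, not any particular model.

```python
import random

VOCAB = ["<s>", "the", "model", "predicts", "next", "token", "</s>"]

# Hypothetical transition table standing in for learned next-token
# probabilities; a real ALM would compute these with a neural network.
BIGRAM = {
    "<s>":      {"the": 1.0},
    "the":      {"model": 0.7, "next": 0.3},
    "model":    {"predicts": 1.0},
    "predicts": {"the": 0.5, "next": 0.5},
    "next":     {"token": 1.0},
    "token":    {"</s>": 1.0},
}

def next_token_dist(prefix):
    """Return p(x_t | x_<t); here it depends only on the last token."""
    return BIGRAM[prefix[-1]]

def generate(max_len=10, seed=0):
    """Sample one token at a time, conditioned on the growing prefix."""
    random.seed(seed)
    seq = ["<s>"]
    while len(seq) < max_len and seq[-1] != "</s>":
        dist = next_token_dist(seq)
        tokens, probs = zip(*dist.items())
        seq.append(random.choices(tokens, weights=probs)[0])
    return seq

print(" ".join(generate()))
```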
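Speculative decoding accelerates this one-token-at-a-time loop by letting a cheap draft model propose several tokens that the expensive target model then verifies together. The sketch below shows a simplified greedy-verification variant; `draft_next` and `target_next` are hypothetical callables returning a greedy next token, and the published algorithm additionally uses rejection sampling so the output distribution exactly matches the target model's, which this variant omits.

```python
def speculative_step(prefix, draft_next, target_next, k=4):
    """One speculative decoding step; `prefix` is a list of tokens."""
    # Draft phase: propose k tokens autoregressively with the cheap model.
    draft = list(prefix)
    for _ in range(k):
        draft.append(draft_next(draft))
    proposals = draft[len(prefix):]

    # Verify phase: check each proposal against the target model's choice.
    # In a real system this is one batched forward pass, not a loop.
    accepted = []
    context = list(prefix)
    for tok in proposals:
        if target_next(context) == tok:
            accepted.append(tok)
            context.append(tok)
        else:
            # First mismatch: keep the target's own token and stop.
            accepted.append(target_next(context))
            break
    else:
        # All k proposals accepted: one bonus token from the target.
        accepted.append(target_next(context))
    return prefix + accepted
```

Because every accepted proposal costs only a share of one target-model pass, the speedup grows with how often the draft model agrees with the target.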