Autoregressive LLM
Autoregressive Large Language Models (LLMs) are a class of generative models that predict sequential data, such as text or images, one element at a time, conditioning each prediction on the elements generated so far. Current research focuses on improving their efficiency (e.g., through adaptive skipping of computational layers), enhancing alignment with human objectives via representation editing, and exploring novel input representations such as compressed image formats. These advances aim to address limitations in computational cost, reliability, and the ability to handle diverse data types, ultimately impacting applications ranging from text generation and question answering to realistic data synthesis.
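The core of autoregressive generation is a loop that repeatedly samples the next element from a distribution conditioned on the sequence so far. The sketch below illustrates that loop with a hypothetical toy bigram table standing in for the model; a real LLM would instead condition on the entire prefix through a neural network, but the sampling structure is the same.

```python
import random

# Toy next-token distributions, conditioned only on the previous token.
# This bigram table is an illustrative stand-in for a trained model.
BIGRAM = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a":   {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"sat": 1.0},
    "sat": {"</s>": 1.0},
}

def generate(max_tokens=10, seed=0):
    """Sample one token at a time until end-of-sequence or a length cap."""
    rng = random.Random(seed)
    tokens = ["<s>"]
    for _ in range(max_tokens):
        probs = BIGRAM[tokens[-1]]              # p(next | context)
        choices, weights = zip(*probs.items())
        nxt = rng.choices(choices, weights=weights)[0]
        tokens.append(nxt)
        if nxt == "</s>":                       # stop at end-of-sequence
            break
    # Drop the start marker (and the end marker, if reached).
    return tokens[1:-1] if tokens[-1] == "</s>" else tokens[1:]

print(generate())
```

Efficiency work such as adaptive layer skipping targets exactly this loop: because every generated token requires a full forward pass over the growing context, reducing per-step computation compounds across the whole sequence.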
Papers
October 19, 2024
October 14, 2024
September 9, 2024
August 15, 2024
June 10, 2024
April 5, 2024
February 7, 2024
January 6, 2024
November 28, 2023
July 3, 2023
October 12, 2022