Autoregressive LLM

Autoregressive Large Language Models (LLMs) are generative models that produce sequential data, such as text or image tokens, one element at a time, with each prediction conditioned on everything generated so far; formally, they factor the joint distribution as p(x_1, ..., x_T) = ∏_t p(x_t | x_{<t}). Current research focuses on improving their efficiency (e.g., by adaptively skipping computational layers), on aligning them with human objectives via representation editing, and on exploring novel input representations such as compressed image formats. These directions aim to reduce computational cost, improve reliability, and broaden the range of data types these models can handle, with applications ranging from text generation and question answering to realistic data synthesis.
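As a minimal sketch of the one-token-at-a-time loop these models share, the toy Python below samples a sequence from a stand-in next-token distribution. The "model" here is a fixed random lookup table conditioned only on the last token (a deliberate simplification of the full prefix conditioning above), and all names (toy_next_token_probs, VOCAB, etc.) are illustrative, not from any specific library.

```python
import numpy as np

# Toy vocabulary; a real LLM has tens of thousands of tokens.
VOCAB = ["<bos>", "the", "cat", "sat", "down", "<eos>"]
EOS_ID = VOCAB.index("<eos>")

rng = np.random.default_rng(0)
# Fixed random logits standing in for a trained neural network.
LOGITS = rng.normal(size=(len(VOCAB), len(VOCAB)))

def toy_next_token_probs(prefix):
    """Stand-in for a neural net: conditions only on the last token."""
    logits = LOGITS[prefix[-1]]
    exp = np.exp(logits - logits.max())  # softmax over the vocabulary
    return exp / exp.sum()

def generate(max_len=10):
    tokens = [VOCAB.index("<bos>")]
    for _ in range(max_len):
        probs = toy_next_token_probs(tokens)            # p(x_t | x_<t)
        next_id = int(rng.choice(len(VOCAB), p=probs))  # sample one token
        tokens.append(next_id)                          # extend the prefix
        if next_id == EOS_ID:                           # stop at end-of-sequence
            break
    return [VOCAB[t] for t in tokens]

print(generate())
```

The structure of the loop (predict a distribution, pick a token, append, repeat) is the same whether the next-token distribution comes from this lookup table or from a transformer; swapping in a trained network changes only the body of toy_next_token_probs.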

Papers