Causal Language Modeling
Causal language modeling trains a model to predict the next token in a sequence, conditioned only on the tokens that precede it, and forms the basis of most large language models (LLMs). Current research emphasizes improving efficiency and knowledge acquisition in these models, exploring techniques such as retrieval-based methods, modified attention mechanisms (e.g., masked mixers), and data augmentation to enhance performance and address limitations such as the "reversal curse" and sensitivity to token order. The field matters because advances in causal language modeling directly shape the capabilities of LLMs across diverse applications, from text generation and translation to question answering and specialized domain expertise.
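The next-token objective and causal masking described above can be made concrete with a short sketch. The snippet below is a minimal illustration in PyTorch; the toy model, vocabulary size, and sequence length are arbitrary assumptions for demonstration, not taken from any of the papers surveyed here. It shows how a causal attention mask restricts each position to earlier tokens, and how the loss compares each position's prediction against the token that follows it.

```python
import torch
import torch.nn.functional as F

# Illustrative sizes (hypothetical, chosen only for the example)
vocab_size, seq_len, d_model = 100, 8, 32

# Toy "model": embedding, one causal self-attention layer, and an output projection.
emb = torch.nn.Embedding(vocab_size, d_model)
attn = torch.nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
lm_head = torch.nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (1, seq_len))  # (batch, seq)

# Causal mask: True entries are blocked, so position i attends only to positions <= i.
causal_mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)

x = emb(tokens)
h, _ = attn(x, x, x, attn_mask=causal_mask)
logits = lm_head(h)  # (batch, seq, vocab)

# Next-token objective: logits at position i are scored against the token at position i+1.
loss = F.cross_entropy(
    logits[:, :-1, :].reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),
)
print(f"next-token cross-entropy: {loss.item():.3f}")
```

The same shifted-target cross-entropy is the training signal in standard decoder-only LLMs; approaches mentioned above (retrieval augmentation, masked mixers, data augmentation) vary the architecture or the training data while keeping this objective.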