Decoder-Only Large Language Models
Decoder-only large language models (LLMs) are a class of AI models that generate text autoregressively, predicting each token from the tokens before it under a causal attention mask. Research on them focuses on improving efficiency and on leveraging pre-trained knowledge for downstream tasks. Current work emphasizes efficient decoding methods, parameter-efficient fine-tuning, and adapting these models to diverse applications such as machine translation, speech-to-text translation, and knowledge graph construction, often through techniques like prompt engineering and contrastive learning. The area is significant because decoder-only models offer the potential for better performance at lower computational cost than encoder-decoder or encoder-only architectures, broadening their accessibility and applicability across many fields.
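To make the autoregressive setup concrete, here is a minimal greedy-decoding loop for a decoder-only model, reusing the key/value cache so each step only feeds the newest token forward. This is an illustrative sketch, not a method from any particular paper: the Hugging Face transformers library, the "gpt2" checkpoint, the prompt, and the 20-token budget are all assumptions chosen for brevity.

```python
# Minimal greedy decoding with KV caching for a decoder-only LM.
# Model name and generation length are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("Decoder-only models generate", return_tensors="pt").input_ids
past_key_values = None
generated = input_ids

with torch.no_grad():
    for _ in range(20):
        # Once the cache is warm, only the most recent token is fed forward;
        # earlier positions are covered by the cached keys and values.
        step_input = generated if past_key_values is None else generated[:, -1:]
        out = model(step_input, past_key_values=past_key_values, use_cache=True)
        past_key_values = out.past_key_values
        next_token = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy pick
        generated = torch.cat([generated, next_token], dim=-1)

print(tokenizer.decode(generated[0]))
```

Reusing the cache is the simplest of the efficient decoding methods mentioned above: it turns each generation step from a full-sequence forward pass into a single-token one.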
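Likewise, parameter-efficient fine-tuning can be sketched with LoRA adapters, which freeze the pre-trained weights and train small low-rank update matrices instead. This assumes the Hugging Face peft library; the rank, scaling factor, and target module ("c_attn" is GPT-2's fused attention projection) are illustrative, not tuned values.

```python
# Sketch of parameter-efficient fine-tuning via LoRA adapters.
# Hyperparameters and the target module are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(
    r=8,                        # rank of the low-rank adapter matrices
    lora_alpha=16,              # scaling factor applied to the adapter update
    target_modules=["c_attn"],  # attention projection to wrap with adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

The appeal is that only a small fraction of parameters receives gradients, so a large pre-trained decoder can be adapted to a downstream task at a fraction of the memory and compute of full fine-tuning.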