Instruction-Tuned Large Language Models
Instruction-tuned large language models (LLMs) are fine-tuned to follow natural-language instructions accurately and to generate relevant responses, addressing the limitations of earlier models that were not optimized to follow instructions. Current research focuses on improving instruction-following capability through techniques such as continual pretraining, model merging, and reinforcement learning from human feedback (RLHF), often applied to architectures such as Llama and GPT. This area matters because it improves the reliability and safety of LLMs in applications such as finance, healthcare, and software development, while also raising important questions about bias mitigation and robustness to adversarial attacks.
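To make the basic recipe concrete, below is a minimal sketch of the supervised instruction fine-tuning (SFT) stage that typically precedes RLHF, written with the Hugging Face transformers library. The model name, prompt template, and toy dataset are illustrative assumptions, not taken from any specific paper listed here.

```python
# Minimal supervised instruction fine-tuning (SFT) sketch.
# Assumptions: a Llama-style causal LM checkpoint and a tiny in-memory
# instruction/response dataset; both are placeholders for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical choice; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Toy instruction/response pairs; real SFT corpora contain many thousands.
examples = [
    {"instruction": "Summarize: The cat sat on the mat.",
     "response": "A cat sat on a mat."},
]

def format_example(ex):
    # Simple prompt template; actual recipes (Alpaca, ChatML, etc.) vary.
    return (f"### Instruction:\n{ex['instruction']}\n\n"
            f"### Response:\n{ex['response']}{tokenizer.eos_token}")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for ex in examples:
    batch = tokenizer(format_example(ex), return_tensors="pt")
    # Standard causal-LM objective: passing input_ids as labels lets the model
    # compute next-token cross-entropy internally. Real SFT setups usually
    # mask the prompt tokens so the loss covers only the response.
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In practice this loop is wrapped in a trainer with batching, learning-rate scheduling, and prompt masking; the sketch only shows the core objective that turns a base model into an instruction follower before any preference-based tuning such as RLHF.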