Instruction-Tuned Language Models
Instruction-tuned large language models (LLMs) are trained to follow natural-language instructions, improving their ability to perform diverse tasks without extensive task-specific training. Current research focuses on extending these models to new modalities (e.g., speech), improving robustness to variations in instruction phrasing, mitigating safety risks such as backdoor attacks and biases, and developing more efficient evaluation metrics. This area matters because it advances the capabilities of LLMs in real-world applications and yields insights into model behavior, prompting strategies, and the broader challenge of aligning AI systems with human intentions.
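To make the core idea concrete, the sketch below shows one supervised fine-tuning step on a single (instruction, response) pair, which is the basic mechanism behind instruction tuning. It is a minimal illustration, not a production recipe: the base model ("gpt2"), the prompt template, and the single hand-written example are all placeholders chosen for brevity; real instruction tuning uses much larger base models and datasets with many thousands of pairs.

```python
# Minimal sketch of one supervised instruction-tuning step,
# assuming a small Hugging Face causal LM (names are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; real runs use larger instruction-ready bases
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# One (instruction, response) pair; real datasets hold many thousands.
instruction = "Summarize: The cat sat on the mat all afternoon."
response = "A cat spent the afternoon sitting on a mat."

# Simple prompt template (hypothetical; formats vary across datasets).
prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
full_ids = tokenizer(prompt + response + tokenizer.eos_token,
                     return_tensors="pt").input_ids

# Mask the prompt tokens with -100 so the loss covers only the response;
# the model learns to produce answers that follow the instruction,
# rather than to reproduce the instruction itself.
labels = full_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100

outputs = model(input_ids=full_ids, labels=labels)  # HF shifts labels internally
outputs.loss.backward()
optimizer.step()
print(f"loss: {outputs.loss.item():.3f}")
```

In practice this loop is scaled to large instruction datasets and batched training, and is often combined with parameter-efficient methods such as LoRA; masking the prompt tokens in the labels is the key design choice that distinguishes instruction tuning from plain language-model fine-tuning.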