Instruction-Tuned Language Models
Instruction-tuned large language models (LLMs) are LLMs further trained to follow natural-language instructions, improving their ability to perform diverse tasks without extensive task-specific training. Current research focuses on extending their performance to additional modalities (e.g., speech), improving robustness to variations in instruction phrasing, mitigating safety risks such as backdoor attacks and biases, and developing more efficient evaluation metrics. This area is significant because it advances the capabilities of LLMs for real-world applications and provides valuable insights into model behavior, prompting strategies, and the broader challenge of aligning AI systems with human intentions.
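To make "trained to follow instructions" concrete, here is a minimal sketch of how instruction-tuning data is commonly prepared: diverse tasks are rendered into one shared (instruction, response) template, and the model is then fine-tuned on the resulting texts. The section markers and template are illustrative assumptions loosely following the widely used Alpaca-style convention, not the recipe of any specific paper listed here.

```python
# Illustrative sketch of instruction-tuning data formatting.
# Assumption: an Alpaca-style "### Instruction / Input / Response" template.

def format_example(instruction: str, response: str, input_text: str = "") -> str:
    """Render one (instruction, response) pair into a single training string."""
    if input_text:
        prompt = (
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    else:
        prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"
    return prompt + response

# A tiny mock dataset: unrelated tasks share one template, which is what
# lets a single fine-tuned model generalize across instructions.
dataset = [
    ("Translate to French.", "Bonjour le monde.", "Hello world."),
    ("Summarize in one word.", "Weather.", "It is sunny and warm today."),
]
training_texts = [format_example(i, r, x) for i, r, x in dataset]
print(training_texts[0])
```

In practice these strings would be tokenized and used for standard next-token fine-tuning, often with the loss computed only on the response portion so the model learns to produce answers rather than to repeat prompts.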