Instruct-Tuned Models
Instruct-tuned models are large language models (LLMs) specifically trained to follow instructions, improving their ability to generate helpful and relevant responses compared to foundation models. Current research focuses on mitigating biases in these models and their evaluators, enhancing their reasoning capabilities through techniques like retrieval-augmented generation (RAG), and improving their performance on tasks involving visual information or code generation by incorporating external knowledge sources and static analysis. This work is significant because it addresses critical limitations of LLMs, leading to more reliable and robust AI systems with broader applications in various fields.
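To make the RAG idea mentioned above concrete, below is a minimal sketch of how retrieved passages are injected into the prompt of an instruct-tuned model. The toy corpus, the keyword-overlap scoring, and the downstream model call are illustrative assumptions, not the method of any specific paper listed here.

```python
# Minimal sketch of retrieval-augmented generation (RAG) for an instruct-tuned model.
# The corpus, the naive retriever, and the final model call are illustrative assumptions.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by keyword overlap with the query (stand-in for a real retriever)."""
    q_tokens = set(query.lower().split())
    scored = sorted(corpus, key=lambda p: len(q_tokens & set(p.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Prepend retrieved passages so the instruct-tuned model can ground its answer."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

if __name__ == "__main__":
    corpus = [
        "Instruction tuning fine-tunes a base LLM on (instruction, response) pairs.",
        "Static analysis can flag errors in model-generated code before execution.",
        "Retrieval-augmented generation injects external documents into the prompt.",
    ]
    query = "How does retrieval-augmented generation help an LLM?"
    prompt = build_prompt(query, retrieve(query, corpus))
    print(prompt)  # This augmented prompt would then be sent to an instruct-tuned model.
```

In practice the keyword scorer would be replaced by a dense or sparse retriever over a document index, but the overall flow (retrieve, assemble context, prompt the instruct-tuned model) is the same.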