Instruction-Following Models
Research on instruction-following models aims to build large language models (LLMs) that reliably and accurately complete tasks specified in user instructions. Current work emphasizes improving these models through techniques such as optimizing prompts (both the instruction and its exemplars), training models efficiently on diverse datasets (e.g., via distribution editing), and developing robust evaluation metrics that go beyond simple accuracy. This work matters because it addresses the critical need for reliable, controllable LLMs, with applications ranging from scientific-literature analysis to graphic design and code generation.
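As a minimal sketch of the prompt-optimization setting described above, the snippet below assembles an instruction-following prompt from the two components typically optimized: the instruction and a set of few-shot exemplars. The template format and function name are illustrative assumptions, not a method from any of the listed papers.

```python
# Hypothetical prompt template: an instruction followed by few-shot
# exemplars, then the query. Both the instruction wording and the
# choice of exemplars are what prompt-optimization methods search over.

def build_prompt(instruction, exemplars, query):
    """Assemble an instruction + exemplar prompt for an LLM."""
    lines = [instruction, ""]
    for inp, out in exemplars:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_prompt(
    "Classify the sentiment of each input as positive or negative.",
    [("I loved this movie.", "positive"),
     ("The food was awful.", "negative")],
    "The service was excellent.",
)
print(prompt)
```

A prompt optimizer would score many candidate `(instruction, exemplars)` pairs on a held-out set and keep the best-performing combination.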
Papers
(18 papers, October 6, 2022 to June 22, 2024)