Instruction Tuning
Instruction tuning refines large language models (LLMs) by training them on datasets of instructions paired with desired responses, improving their ability to follow diverse commands and generate helpful outputs. Current research emphasizes improving data quality and diversity through techniques such as data partitioning, synthetic data generation, and novel prompting strategies, applied to both text-only and multimodal models. This area is significant because it directly addresses the limitations of pre-trained LLMs, leading to safer, more reliable, and more useful AI systems across numerous applications, from chatbots to specialized tools for medical diagnosis and remote sensing.
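At its core, instruction tuning turns each (instruction, response) pair into a single training sequence, typically computing the language-modeling loss only on the response tokens. The sketch below illustrates this formatting step; the prompt template, the whitespace "tokenizer", and the function name are illustrative assumptions, not any specific library's API.

```python
# Minimal sketch of assembling one instruction-tuning training example.
# The "### Instruction / ### Response" template is a common convention,
# shown here as an assumption; real pipelines use a subword tokenizer
# rather than whitespace splitting.

def build_example(instruction: str, response: str, eos: str = "</s>"):
    """Format an (instruction, response) pair into a token sequence,
    plus a mask marking which tokens contribute to the loss."""
    prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"
    full_text = prompt + response + eos
    # Whitespace tokenization stands in for a real subword tokenizer.
    prompt_tokens = prompt.split()
    full_tokens = full_text.split()
    # Standard practice: compute the loss only on response tokens, so
    # the model learns to generate answers, not to echo instructions.
    loss_mask = [0] * len(prompt_tokens) + [1] * (
        len(full_tokens) - len(prompt_tokens)
    )
    return full_tokens, loss_mask

tokens, mask = build_example(
    "Summarize the paper in one sentence.",
    "CodecLM generates tailored synthetic data to align LLMs.",
)
print(len(tokens), sum(mask))  # sequence length, number of supervised tokens
```

Masking the instruction portion of the loss is a design choice: including it can still work, but supervising only the response focuses the gradient on the behavior being taught.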
Papers
CodecLM: Aligning Language Models with Tailored Synthetic Data
Zifeng Wang, Chun-Liang Li, Vincent Perot, Long T. Le, Jin Miao, Zizhao Zhang, Chen-Yu Lee, Tomas Pfister
LayoutLLM: Layout Instruction Tuning with Large Language Models for Document Understanding
Chuwei Luo, Yufan Shen, Zhaoqing Zhu, Qi Zheng, Zhi Yu, Cong Yao
Symbolic Prompt Program Search: A Structure-Aware Approach to Efficient Compile-Time Prompt Optimization
Tobias Schnabel, Jennifer Neville
FLawN-T5: An Empirical Examination of Effective Instruction-Tuning Data Mixtures for Legal Reasoning
Joel Niklaus, Lucia Zheng, Arya D. McCarthy, Christopher Hahn, Brian M. Rosen, Peter Henderson, Daniel E. Ho, Garrett Honke, Percy Liang, Christopher Manning