Instruction Tuning
Instruction tuning refines large language models (LLMs) by training them on datasets of instructions paired with desired responses, improving their ability to follow diverse commands and generate helpful outputs. Current research emphasizes improving data quality and diversity through techniques such as data partitioning, synthetic data generation, and novel prompting strategies, applied to both text-only LLMs and multimodal models. This area is significant because it directly addresses the limitations of pre-trained LLMs in following user intent, leading to safer, more reliable, and more useful AI systems across numerous applications, from chatbots to specialized tools for medical diagnosis and remote sensing. A minimal sketch of the training recipe appears below.
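To make the recipe concrete, the sketch below fine-tunes a small causal language model on a toy set of instruction–response pairs using standard supervised next-token training. It is an illustrative example only, not the method of any paper listed here: the checkpoint ("gpt2"), the prompt template, the toy data, and the hyperparameters are all placeholder assumptions.

```python
# Minimal instruction-tuning sketch: supervised fine-tuning on
# (instruction, response) pairs with a causal language model.
# All names and data below are illustrative placeholders.
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Each example pairs an instruction with its desired response (toy data).
pairs = [
    {"instruction": "Summarize: The cat sat on the mat.", "response": "A cat sat on a mat."},
    {"instruction": "Translate to French: Hello.", "response": "Bonjour."},
]

def collate(batch):
    # Concatenate instruction and response into one sequence; training on the
    # next-token loss teaches the model to continue instructions with responses.
    texts = [
        f"### Instruction:\n{ex['instruction']}\n### Response:\n{ex['response']}{tokenizer.eos_token}"
        for ex in batch
    ]
    enc = tokenizer(texts, return_tensors="pt", padding=True, truncation=True, max_length=512)
    enc["labels"] = enc["input_ids"].clone()
    enc["labels"][enc["attention_mask"] == 0] = -100  # ignore padding in the loss
    return enc

loader = DataLoader(pairs, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for epoch in range(1):
    for batch in loader:
        loss = model(**batch).loss  # standard cross-entropy over response tokens
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

In practice the instruction portion of each sequence is often masked out of the loss as well, and the data comes from large curated or synthetic instruction datasets rather than a hand-written list; this sketch keeps only the core supervised fine-tuning loop.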
Papers
Navigating the Dual Facets: A Comprehensive Evaluation of Sequential Memory Editing in Large Language Models
Zihao Lin, Mohammad Beigi, Hongxuan Li, Yufan Zhou, Yuxiang Zhang, Qifan Wang, Wenpeng Yin, Lifu Huang
Instruction Diversity Drives Generalization To Unseen Tasks
Dylan Zhang, Justin Wang, Francois Charton
Smaller Language Models are capable of selecting Instruction-Tuning Training Data for Larger Language Models
Dheeraj Mekala, Alex Nguyen, Jingbo Shang
Instruction Tuning for Secure Code Generation
Jingxuan He, Mark Vero, Gabriela Krasnopolska, Martin Vechev
DolphCoder: Echo-Locating Code Large Language Models with Diverse and Multi-Objective Instruction Tuning
Yejie Wang, Keqing He, Guanting Dong, Pei Wang, Weihao Zeng, Muxi Diao, Yutao Mou, Mengdi Zhang, Jingang Wang, Xunliang Cai, Weiran Xu