Instruction Tuning
Instruction tuning refines large language models (LLMs) by training them on datasets of instructions paired with desired responses, improving their ability to follow diverse commands and generate helpful outputs. Current research emphasizes improving data quality and diversity through techniques such as data partitioning, synthetic data generation, and novel prompting strategies, applied across model families including text-only and multimodal LLMs. This area is significant because it directly addresses the limitations of pre-trained LLMs, leading to safer, more reliable, and more useful AI systems across numerous applications, from chatbots to specialized tools for medical diagnosis and remote sensing.
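At its core, instruction tuning is supervised fine-tuning on (instruction, response) pairs, where the loss is typically computed only on the response tokens so the model learns to answer rather than to echo the prompt. The following is a minimal sketch of that data-preparation step; the prompt template and the toy whitespace tokenizer are illustrative assumptions, not taken from any paper listed below.

```python
# Minimal sketch of instruction-tuning data preparation.
# The template and the whitespace "tokenizer" are stand-ins for a real
# chat template and subword tokenizer.

TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n"
IGNORE_INDEX = -100  # conventional "ignore this position" label in loss masking


def tokenize(text):
    """Toy whitespace tokenizer standing in for a real subword tokenizer."""
    return text.split()


def build_example(instruction, response):
    """Format one (instruction, response) pair for supervised fine-tuning.

    Prompt positions receive IGNORE_INDEX labels so the training loss is
    computed only on the response tokens.
    """
    prompt_tokens = tokenize(TEMPLATE.format(instruction=instruction))
    response_tokens = tokenize(response)
    return {
        "input_ids": prompt_tokens + response_tokens,
        "labels": [IGNORE_INDEX] * len(prompt_tokens) + response_tokens,
    }


example = build_example("Name a primary color.", "Red is a primary color.")
# Only the 5 response tokens carry supervision; the prompt is masked out.
```

In practice the token lists would be integer IDs and the batch would be fed to a causal language model, but the masking pattern shown here is the piece that distinguishes instruction tuning from plain next-token pre-training.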
Papers
Separable Mixture of Low-Rank Adaptation for Continual Visual Instruction Tuning
Ziqi Wang, Chang Che, Qi Wang, Yangyang Li, Zenglin Shi, Meng Wang
InstCache: A Predictive Cache for LLM Serving
Longwei Zou, Tingfeng Liu, Kai Chen, Jiangang Kong, Yangdong Deng
Star-Agents: Automatic Data Optimization with LLM Agents for Instruction Tuning
Hang Zhou, Yehui Tang, Haochen Qin, Yujie Yang, Renren Jin, Deyi Xiong, Kai Han, Yunhe Wang
MLAN: Language-Based Instruction Tuning Improves Zero-Shot Generalization of Multimodal Large Language Models
Jianhong Tu, Zhuohao Ni, Nicholas Crispino, Zihao Yu, Michael Bendersky, Beliz Gunel, Ruoxi Jia, Xin Liu, Lingjuan Lyu, Dawn Song, Chenguang Wang
Orca: Enhancing Role-Playing Abilities of Large Language Models by Integrating Personality Traits
Yuxuan Huang
SelfCodeAlign: Self-Alignment for Code Generation
Yuxiang Wei, Federico Cassano, Jiawei Liu, Yifeng Ding, Naman Jain, Zachary Mueller, Harm de Vries, Leandro von Werra, Arjun Guha, Lingming Zhang
Instruction-Tuning Llama-3-8B Excels in City-Scale Mobility Prediction
Peizhi Tang, Chuang Yang, Tong Xing, Xiaohang Xu, Renhe Jiang, Kaoru Sezaki