Instruction Tuning
Instruction tuning refines large language models (LLMs) by training them on datasets of instructions paired with desired responses, improving their ability to follow diverse commands and produce helpful outputs. Current research emphasizes improving data quality and diversity through techniques such as data partitioning, synthetic data generation, and novel prompting strategies, applied to both text-only LLMs and multimodal models. The area matters because pre-training alone does not teach a model to follow user intent; instruction tuning closes that gap, yielding safer, more reliable, and more useful AI systems across applications ranging from chatbots to specialized tools for medical diagnosis and remote sensing.
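At its core, the recipe is supervised fine-tuning on (instruction, response) pairs. The sketch below illustrates that loop with the Hugging Face transformers API; the "gpt2" checkpoint and the two-example dataset are placeholders, and masking prompt tokens so the loss covers only the response is a common convention rather than a method prescribed by any particular paper listed here.

```python
# Minimal sketch of supervised instruction tuning, assuming a Hugging Face
# causal LM checkpoint ("gpt2" is a small placeholder) and a toy in-memory
# dataset of instruction/response pairs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; real runs use much larger base models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy instruction data; real instruction-tuning sets hold many thousands of pairs.
examples = [
    {"instruction": "Translate 'good morning' to French.", "response": "Bonjour."},
    {"instruction": "Name the largest planet in the solar system.", "response": "Jupiter."},
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()

for ex in examples:
    # Format the pair with a simple prompt template.
    prompt = f"### Instruction:\n{ex['instruction']}\n\n### Response:\n"
    full_text = prompt + ex["response"] + tokenizer.eos_token

    enc = tokenizer(full_text, return_tensors="pt")
    labels = enc["input_ids"].clone()

    # Mask the prompt tokens with -100 so the loss is computed only on the
    # response tokens (a common, though not universal, choice).
    prompt_len = len(tokenizer(prompt)["input_ids"])
    labels[:, :prompt_len] = -100

    outputs = model(
        input_ids=enc["input_ids"],
        attention_mask=enc["attention_mask"],
        labels=labels,
    )
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In practice this loop is run over shuffled mini-batches with padding, learning-rate scheduling, and evaluation on held-out instructions; the papers below vary the data (cross-lingual, temporal, refusal-aware) and the objective far more than this basic structure.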
Papers
Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2
Hamish Ivison, Yizhong Wang, Valentina Pyatkin, Nathan Lambert, Matthew Peters, Pradeep Dasigi, Joel Jang, David Wadden, Noah A. Smith, Iz Beltagy, Hannaneh Hajishirzi
Exploring the Relationship between In-Context Learning and Instruction Tuning
Hanyu Duan, Yixuan Tang, Yi Yang, Ahmed Abbasi, Kar Yan Tam
TaCo: Enhancing Cross-Lingual Transfer for Low-Resource Languages in LLMs through Translation-Assisted Chain-of-Thought Processes
Bibek Upadhayay, Vahid Behzadan
Towards Robust Temporal Reasoning of Large Language Models via a Multi-Hop QA Dataset and Pseudo-Instruction Tuning
Qingyu Tan, Hwee Tou Ng, Lidong Bing
R-Tuning: Instructing Large Language Models to Say 'I Don't Know'
Hanning Zhang, Shizhe Diao, Yong Lin, Yi R. Fung, Qing Lian, Xingyao Wang, Yangyi Chen, Heng Ji, Tong Zhang
X-Eval: Generalizable Multi-aspect Text Evaluation via Augmented Instruction Tuning with Auxiliary Evaluation Aspects
Minqian Liu, Ying Shen, Zhiyang Xu, Yixin Cao, Eunah Cho, Vaibhav Kumar, Reza Ghanadan, Lifu Huang
PLUG: Leveraging Pivot Language in Cross-Lingual Instruction Tuning
Zhihan Zhang, Dong-Ho Lee, Yuwei Fang, Wenhao Yu, Mengzhao Jia, Meng Jiang, Francesco Barbieri