Instruction Tuning
Instruction tuning refines large language models (LLMs) by training them on datasets of instructions paired with desired responses, improving their ability to follow diverse commands and produce helpful outputs. Current research emphasizes improving data quality and diversity through techniques such as data partitioning, synthetic data generation, and novel prompting strategies, applied both to text-only LLMs and to multimodal models. The area is significant because pre-trained LLMs alone do not reliably follow user instructions; addressing this limitation yields safer, more reliable, and more useful AI systems across applications ranging from chatbots to specialized tools for medical diagnosis and remote sensing.
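To make the basic recipe concrete, below is a minimal sketch of supervised instruction tuning using the Hugging Face transformers and datasets libraries: instruction/response pairs are formatted into training sequences and a causal language model is fine-tuned on them. The model name (gpt2), the prompt template, the toy data, and the hyperparameters are illustrative assumptions, not details drawn from any of the papers listed here.

```python
# Minimal sketch of supervised instruction tuning (illustrative assumptions:
# model, prompt template, toy data, and hyperparameters are placeholders).
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL_NAME = "gpt2"  # assumed small model, for illustration only

# Toy instruction/response pairs standing in for a real instruction dataset.
pairs = [
    {"instruction": "Summarize: The cat sat on the mat.",
     "response": "A cat sat on a mat."},
    {"instruction": "Translate to French: Good morning.",
     "response": "Bonjour."},
]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def format_and_tokenize(example):
    # Concatenate the instruction and the desired response into one
    # training sequence, then tokenize it for causal LM training.
    text = (f"### Instruction:\n{example['instruction']}\n\n"
            f"### Response:\n{example['response']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=512)

train_ds = Dataset.from_list(pairs).map(
    format_and_tokenize, remove_columns=["instruction", "response"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-out",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train_ds,
    # mlm=False gives standard next-token (causal) language modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```

Most of the papers below keep this supervised fine-tuning loop fixed and instead vary what goes into it: which instruction/response pairs are selected or synthesized, how they are ordered or composed, and how the resulting data mix is balanced.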
Papers
TAIA: Large Language Models are Out-of-Distribution Data Learners
Shuyang Jiang, Yusheng Liao, Ya Zhang, Yanfeng Wang, Yu Wang
From Symbolic Tasks to Code Generation: Diversification Yields Better Task Performers
Dylan Zhang, Justin Wang, Francois Charton
X-Instruction: Aligning Language Model in Low-resource Languages with Self-curated Cross-lingual Instructions
Chong Li, Wen Yang, Jiajun Zhang, Jinliang Lu, Shaonan Wang, Chengqing Zong
Instruct-MusicGen: Unlocking Text-to-Music Editing for Music Language Models via Instruction Tuning
Yixiao Zhang, Yukara Ikemiya, Woosung Choi, Naoki Murata, Marco A. Martínez-Ramírez, Liwei Lin, Gus Xia, Wei-Hsiang Liao, Yuki Mitsufuji, Simon Dixon
Instruction Tuning with Retrieval-based Examples Ranking for Aspect-based Sentiment Analysis
Guangmin Zheng, Jin Wang, Liang-Chih Yu, Xuejie Zhang
Distilling Instruction-following Abilities of Large Language Models with Task-aware Curriculum Planning
Yuanhao Yue, Chengyu Wang, Jun Huang, Peng Wang
Disperse-Then-Merge: Pushing the Limits of Instruction Tuning via Alignment Tax Reduction
Tingchen Fu, Deng Cai, Lemao Liu, Shuming Shi, Rui Yan
Mosaic-IT: Free Compositional Data Augmentation Improves Instruction Tuning
Ming Li, Pei Chen, Chenguang Wang, Hongyu Zhao, Yijun Liang, Yupeng Hou, Fuxiao Liu, Tianyi Zhou