Paper ID: 2406.11301
Enhancing and Assessing Instruction-Following with Fine-Grained Instruction Variants
Jiuding Yang, Weidong Guo, Kaitong Yang, Xiangyang Li, Yu Xu, Di Niu
The effective alignment of Large Language Models (LLMs) with precise instructions is essential for their application in diverse real-world scenarios. Current methods focus on enhancing the diversity and complexity of training and evaluation samples, yet they fall short in accurately assessing LLMs' ability to follow similar instruction variants. We introduce DeMoRecon, an effective data augmentation technique that decomposes complex instructions into simpler sub-components, modifies these, and reconstructs them into new variants. This preserves the context and complexity of the original instruction while introducing variability, which is critical for training and evaluating LLMs' instruction-following precision. Based on DeMoRecon, we developed the FGIV dataset, which contains fine-grained instruction variants of 1,773 seed instructions for both fine-tuning and evaluating LLMs. Our findings show that LLMs fine-tuned with FGIV gain significant performance boosts on both our own and commonly used instruction-following benchmarks.
Submitted: Jun 17, 2024
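The abstract describes a three-stage decompose-modify-reconstruct pipeline. The sketch below illustrates that control flow only, under stated assumptions: the paper drives each stage with an LLM, whereas this toy version uses sentence splitting and a hardcoded constraint-swap table, and every function name (decompose, modify, reconstruct, make_variant) and the example swaps are hypothetical, not the authors' code.

```python
# Minimal, self-contained sketch of a decompose-modify-reconstruct loop.
# Hypothetical illustration of the pipeline shape; not the DeMoRecon implementation.
import random

def decompose(instruction: str) -> list[str]:
    """Split a compound instruction into simpler sub-instructions.
    Naive sentence split; the paper instead prompts an LLM to decompose."""
    return [s.strip() for s in instruction.split(".") if s.strip()]

def modify(sub: str, rng: random.Random) -> str:
    """Perturb one sub-instruction, e.g. by swapping a constraint value.
    A real system would ask an LLM to rewrite the constraint."""
    swaps = {"three": "five", "formal": "casual", "English": "French"}
    for old, new in swaps.items():
        if old in sub:
            return sub.replace(old, new)
    return sub

def reconstruct(subs: list[str]) -> str:
    """Reassemble sub-instructions into a single instruction variant."""
    return ". ".join(subs) + "."

def make_variant(instruction: str, seed: int = 0) -> str:
    """Produce one fine-grained variant: decompose, perturb one part, rejoin."""
    rng = random.Random(seed)
    subs = decompose(instruction)
    idx = rng.randrange(len(subs))  # pick one sub-instruction to modify
    subs[idx] = modify(subs[idx], rng)
    return reconstruct(subs)

if __name__ == "__main__":
    seed_instruction = ("Summarize the article in three sentences. "
                        "Use a formal tone. Answer in English.")
    print(make_variant(seed_instruction, seed=1))
```

Because only one sub-component is perturbed per variant, each output stays close to the seed instruction in context and complexity, which is what makes such variants useful for probing instruction-following precision.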