Supervised Fine-Tuning
Supervised fine-tuning (SFT) adapts pre-trained large language models (LLMs) to specific tasks by training them on labeled data, aiming to improve performance and alignment with human preferences. Current research focuses on optimizing SFT methods, including exploring alternative loss functions (e.g., beyond cross-entropy), developing techniques to mitigate training imbalances and overfitting, and investigating the interplay between SFT and reinforcement learning. These advancements are significant because they enhance the efficiency and effectiveness of adapting LLMs for diverse applications, ranging from question answering and code generation to specialized domains like biomedicine and legal text processing.
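To make the standard objective concrete, here is a minimal sketch of the masked token-level cross-entropy loss typically used in SFT, where prompt tokens are excluded and the loss is averaged over response tokens only. The function name, toy log-probabilities, and mask values are illustrative, not from any specific paper above.

```python
import math

def sft_loss(token_logprobs, loss_mask):
    """Masked next-token cross-entropy as commonly used in SFT:
    average negative log-likelihood over response tokens only
    (prompt tokens carry mask = 0 and are excluded)."""
    # token_logprobs: log p(token_t | tokens_<t) under the model (toy values here)
    losses = [-lp for lp, m in zip(token_logprobs, loss_mask) if m]
    return sum(losses) / len(losses)

# Toy sequence: 2 prompt tokens (masked out) + 3 response tokens (trained on).
logprobs = [math.log(0.9), math.log(0.8),                      # prompt tokens
            math.log(0.5), math.log(0.25), math.log(0.125)]    # response tokens
mask = [0, 0, 1, 1, 1]
print(round(sft_loss(logprobs, mask), 4))  # → 1.3863
```

Several of the papers listed below modify exactly this objective, e.g. by masking additional reasoning-step tokens or replacing the cross-entropy term with an alternative loss.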
Papers
Balancing Enhancement, Harmlessness, and General Capabilities: Enhancing Conversational LLMs with Direct RLHF
Chen Zheng, Ke Sun, Hang Wu, Chenguang Xi, Xun Zhou
Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models
Changyu Chen, Xiting Wang, Ting-En Lin, Ang Lv, Yuchuan Wu, Xin Gao, Ji-Rong Wen, Rui Yan, Yongbin Li
Analyzing and Adapting Large Language Models for Few-Shot Multilingual NLU: Are We There Yet?
Evgeniia Razumovskaia, Ivan Vulić, Anna Korhonen