Supervised Fine-Tuning
Supervised fine-tuning (SFT) adapts pre-trained large language models (LLMs) to specific tasks by training them on labeled data, aiming to improve performance and alignment with human preferences. Current research focuses on optimizing SFT methods, including exploring alternative loss functions (e.g., beyond cross-entropy), developing techniques to mitigate training imbalances and overfitting, and investigating the interplay between SFT and reinforcement learning. These advancements are significant because they enhance the efficiency and effectiveness of adapting LLMs for diverse applications, ranging from question answering and code generation to specialized domains like biomedicine and legal text processing.
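To make the standard setup concrete, the sketch below shows the token-level cross-entropy objective typically used in SFT, where prompt tokens are masked out (by convention with an ignore index of -100) so that only response tokens contribute to the loss. This is a minimal NumPy illustration of the common recipe, not code from any of the papers listed here; the function name and toy data are invented for the example.

```python
import numpy as np

def sft_loss(logits, labels, ignore_index=-100):
    """Masked token-level cross-entropy, the usual SFT objective.

    logits: (seq_len, vocab) unnormalized next-token scores
    labels: (seq_len,) target token ids; prompt positions are set to
            ignore_index so only response tokens are supervised
    """
    # log-softmax, computed stably by subtracting the row-wise max
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    mask = labels != ignore_index
    # mean negative log-likelihood over the unmasked (response) tokens
    picked = log_probs[np.arange(len(labels))[mask], labels[mask]]
    return -picked.mean()

# toy example: 4 positions, vocab size 3; the first two positions are
# prompt tokens and are masked with -100
logits = np.array([[2.0, 0.0, 0.0],
                   [0.0, 2.0, 0.0],
                   [0.0, 0.0, 2.0],
                   [2.0, 0.0, 0.0]])
labels = np.array([-100, -100, 2, 0])
loss = sft_loss(logits, labels)
print(float(loss))
```

Because the prompt positions are masked, changing the logits there leaves the loss unchanged; research on alternative SFT losses typically modifies or reweights exactly this per-token term.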
Papers
Empirical Insights on Fine-Tuning Large Language Models for Question-Answering
Junjie Ye, Yuming Yang, Qi Zhang, Tao Gui, Xuanjing Huang, Peng Wang, Zhongchao Shi, Jianping Fan
Supervised Fine-Tuning Achieve Rapid Task Adaption Via Alternating Attention Head Activation Patterns
Yang Zhao, Li Du, Xiao Ding, Kai Xiong, Ting Liu, Bing Qin
SmileyLlama: Modifying Large Language Models for Directed Chemical Space Exploration
Joseph M. Cavanagh, Kunyang Sun, Andrew Gritsevskiy, Dorian Bagni, Thomas D. Bannister, Teresa Head-Gordon
From Yes-Men to Truth-Tellers: Addressing Sycophancy in Large Language Models with Pinpoint Tuning
Wei Chen, Zhen Huang, Liang Xie, Binbin Lin, Houqiang Li, Le Lu, Xinmei Tian, Deng Cai, Yonggang Zhang, Wenxiao Wang, Xu Shen, Jieping Ye