Fine-Tuning
Fine-tuning adapts pre-trained large language models (LLMs) to specific tasks, delivering better task performance at a fraction of the cost of training from scratch. Current research emphasizes parameter-efficient methods such as low-rank adaptation (LoRA), alongside techniques that address catastrophic forgetting and calibration issues; approaches such as bilevel optimization and adaptive noise allocation are often employed to improve performance and preserve privacy. This work is significant because it enables the deployment of powerful LLMs across diverse applications, from medical diagnosis to visual editing, while mitigating resource constraints and privacy concerns.
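To make the LoRA idea concrete, below is a minimal sketch of how low-rank adaptation injects trainable low-rank matrices alongside a frozen linear layer. The class name, rank, and scaling factor are illustrative assumptions, not drawn from any of the papers listed on this page.

```python
# Minimal LoRA sketch in PyTorch (illustrative; not from any specific paper below).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(base.out_features, r))        # up-projection, zero-init
        self.scale = alpha / r

    def forward(self, x):
        # Frozen path plus scaled low-rank correction; only A and B receive gradients.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap a projection layer and fine-tune only the LoRA parameters.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
out = layer(torch.randn(2, 768))
```

Because the base weights stay frozen and only the small A and B matrices are updated, the number of trainable parameters is reduced by orders of magnitude, which is what makes such methods attractive in federated and resource-constrained settings like those studied in several papers below.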
Papers
Federated Fine-Tuning of LLMs: Framework Comparison and Research Directions
Na Yan, Yang Su, Yansha Deng, Robert Schober
Navigating the Designs of Privacy-Preserving Fine-tuning for Large Language Models
Shi Haonan, Ouyang Tu, Wang An
RoRA: Efficient Fine-Tuning of LLM with Reliability Optimization for Rank Adaptation
Jun Liu, Zhenglun Kong, Peiyan Dong, Xuan Shen, Pu Zhao, Hao Tang, Geng Yuan, Wei Niu, Wenbin Zhang, Xue Lin, Dong Huang, Yanzhi Wang
Rate-My-LoRA: Efficient and Adaptive Federated Model Tuning for Cardiac MRI Segmentation
Xiaoxiao He, Haizhou Shi, Ligong Han, Chaowei Tan, Bo Liu, Zihao Xu, Meng Ye, Leon Axel, Kang Li, Dimitris Metaxas
The Scaling Law for LoRA Base on Mutual Information Upper Bound
Jing Zhang, Hui Gao, Peng Zhang, Shuzhen Sun, Chang Yang, Yuexian Hou
ADePT: Adaptive Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning
Pengwei Tang, Xiaolin Hu, Yong Liu
From Superficial Patterns to Semantic Understanding: Fine-Tuning Language Models on Contrast Sets
Daniel Petrov
HALO: Hadamard-Assisted Lossless Optimization for Efficient Low-Precision LLM Training and Fine-Tuning
Saleh Ashkboos, Mahdi Nikdan, Soroush Tabesh, Roberto L. Castro, Torsten Hoefler, Dan Alistarh
Efficient Deployment of Large Language Models on Resource-constrained Devices
Zhiwei Yao, Yang Xu, Hongli Xu, Yunming Liao, Zuan Xie
Towards Compatible Fine-tuning for Vision-Language Model Updates
Zhengbo Wang, Jian Liang, Lijun Sheng, Ran He, Zilei Wang, Tieniu Tan
Two Heads Are Better Than One: Averaging along Fine-Tuning to Improve Targeted Transferability
Hui Zeng, Sanshuai Cui, Biwei Chen, Anjie Peng
Metadata-Enhanced Speech Emotion Recognition: Augmented Residual Integration and Co-Attention in Two-Stage Fine-Tuning
Zixiang Wan, Ziyue Qiu, Yiyang Liu, Wei-Qiang Zhang