LLM Fine-Tuning
Fine-tuning adapts pre-trained large language models (LLMs) to specific tasks using comparatively small datasets, achieving strong task performance at far lower cost than training from scratch. Current research emphasizes parameter-efficient methods such as LoRA, techniques to mitigate catastrophic forgetting and training-data imbalance, preference-based objectives such as DPO, variance-reduced optimizers such as SVRG, and diverse architectures including Mixture-of-Experts. This area is crucial for deploying LLMs in real-world applications, enabling customization for specific domains while addressing resource constraints and safety concerns.
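As a concrete illustration of the parameter-efficient approach mentioned above, here is a minimal LoRA fine-tuning sketch using the Hugging Face transformers and peft libraries. The base model name, target modules, and hyperparameters are illustrative assumptions, not values taken from the papers listed below.

```python
# Minimal sketch: parameter-efficient fine-tuning with LoRA.
# Assumes the Hugging Face `transformers` and `peft` libraries;
# model name and hyperparameters below are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA freezes the pre-trained weights W and learns a low-rank update
# W + (alpha / r) * B @ A, where A and B are small trainable matrices.
lora_config = LoraConfig(
    r=8,                                   # rank of the update matrices
    lora_alpha=16,                         # scaling factor alpha
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

The wrapped model can then be trained with a standard training loop or the transformers Trainer; because only the low-rank adapters receive gradients, optimizer state and memory use shrink dramatically, which is the motivation shared by several of the memory-efficient methods below.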
Papers
OwLore: Outlier-weighed Layerwise Sampled Low-Rank Projection for Memory-Efficient LLM Fine-tuning
Pengxiang Li, Lu Yin, Xiaowei Gao, Shiwei Liu
Pipette: Automatic Fine-grained Large Language Model Training Configurator for Real-World Clusters
Jinkyu Yim, Jaeyong Song, Yerim Choi, Jaebeen Lee, Jaewon Jung, Hongsun Jang, Jinho Lee
Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark
Yihua Zhang, Pingzhi Li, Junyuan Hong, Jiaxiang Li, Yimeng Zhang, Wenqing Zheng, Pin-Yu Chen, Jason D. Lee, Wotao Yin, Mingyi Hong, Zhangyang Wang, Sijia Liu, Tianlong Chen
Federated Fine-tuning of Large Language Models under Heterogeneous Tasks and Client Resources
Jiamu Bai, Daoyuan Chen, Bingchen Qian, Liuyi Yao, Yaliang Li