Fine-Tuning
Fine-tuning adapts pre-trained large language models (LLMs) to specific tasks, improving performance and efficiency relative to training from scratch. Current research emphasizes parameter-efficient methods such as low-rank adaptation (LoRA), along with techniques that address catastrophic forgetting and calibration issues, often using bilevel optimization or adaptive noise allocation to improve performance and preserve privacy. This work is significant because it enables powerful LLMs to be deployed across diverse applications, from medical diagnosis to visual editing, while mitigating resource constraints and privacy concerns.
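To make the LoRA idea mentioned above concrete, the sketch below shows a minimal low-rank adapter wrapped around a frozen linear layer: the pre-trained weight stays fixed and only a small trainable update B·A is learned. This is an illustrative, self-contained PyTorch example, not code from any of the listed papers; the class name `LoRALinear` and the rank/alpha defaults are assumptions chosen for the example.

```python
# Minimal LoRA sketch: the frozen layer computes base(x), and a trainable
# low-rank update (B @ A) is added on top, scaled by alpha / rank.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pre-trained weights
            p.requires_grad = False
        # A is small random, B starts at zero so the adapter is a no-op at init
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # frozen path plus trainable low-rank correction
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


# Usage: wrap an existing projection layer and optimize only the adapter weights.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
optimizer = torch.optim.AdamW(
    [p for p in layer.parameters() if p.requires_grad], lr=1e-4
)
out = layer(torch.randn(4, 768))              # (batch, hidden) forward pass
```

Because only `lora_A` and `lora_B` receive gradients, the number of trained parameters is a small fraction of the full weight matrix, which is what makes this family of methods attractive for the resource-constrained and federated settings covered by several papers below.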
Papers
F$^3$OCUS -- Federated Finetuning of Vision-Language Foundation Models with Optimal Client Layer Updating Strategy via Multi-objective Meta-Heuristics
Pramit Saha, Felix Wagner, Divyanshu Mishra, Can Peng, Anshul Thakur, David Clifton, Konstantinos Kamnitsas, J. Alison Noble
Learn from Downstream and Be Yourself in Multimodal Large Language Model Fine-Tuning
Wenke Huang, Jian Liang, Zekun Shi, Didi Zhu, Guancheng Wan, He Li, Bo Du, Dacheng Tao, Mang Ye
PERFT: Parameter-Efficient Routed Fine-Tuning for Mixture-of-Expert Model
Yilun Liu, Yunpu Ma, Shuo Chen, Zifeng Ding, Bailan He, Zhen Han, Volker Tresp
Efficient Federated Finetuning of Tiny Transformers with Resource-Constrained Devices
Kilian Pfeiffer, Mohamed Aboelenien Ahmed, Ramin Khalili, Jörg Henkel
Maximizing domain generalization in fetal brain tissue segmentation: the role of synthetic data generation, intensity clustering and real image fine-tuning
Vladyslav Zalevskyi, Thomas Sanchez, Margaux Roulet, Hélène Lajous, Jordina Aviles Verdera, Jana Hutter, Hamza Kebiri, Meritxell Bach Cuadra
Model Fusion through Bayesian Optimization in Language Model Fine-Tuning
Chaeyun Jang, Hyungi Lee, Jungtaek Kim, Juho Lee
Dialectal Coverage And Generalization in Arabic Speech Recognition
Amirbek Djanibekov, Hawau Olamide Toyin, Raghad Alshalan, Abdullah Alitr, Hanan Aldarmaki
CodeLutra: Boosting LLM Code Generation via Preference-Guided Refinement
Leitian Tao, Xiang Chen, Tong Yu, Tung Mai, Ryan Rossi, Yixuan Li, Saayan Mitra
Multistage Fine-tuning Strategies for Automatic Speech Recognition in Low-resource Languages
Leena G Pillai, Kavya Manohar, Basil K Raju, Elizabeth Sherly
DELIFT: Data Efficient Language model Instruction Fine Tuning
Ishika Agarwal, Krishna Killamsetty, Lucian Popa, Marina Danilevksy
Robust and Efficient Fine-tuning of LLMs with Bayesian Reparameterization of Low-Rank Adaptation
Vaibhav Seth, Arinjay Pathak, Ayan Sengupta, Natraj Raman, Sriram Gopalakrishnan, Tanmoy Chakraborty
A Comparative Analysis of Instruction Fine-Tuning LLMs for Financial Text Classification
Sorouralsadat Fatemi, Yuheng Hu, Maryam Mousavi
Detect an Object At Once without Fine-tuning
Junyu Hao, Jianheng Liu, Yongjia Zhao, Zuofan Chen, Qi Sun, Jinlong Chen, Jianguo Wei, Minghao Yang
Parameter-Efficient Fine-Tuning of Large Language Models for Unit Test Generation: An Empirical Study
André Storhaug, Jingyue Li
Towards Pedagogical LLMs with Supervised Fine Tuning for Computing Education
Alexandra Vassar, Jake Renzella, Emily Ross, Andrew Taylor