Fine-Tuning
Fine-tuning adapts pre-trained large language models (LLMs) to specific tasks, delivering stronger task performance at far lower cost than training from scratch. Current research emphasizes parameter-efficient methods such as low-rank adaptation (LoRA), along with techniques that address catastrophic forgetting and calibration issues, often using bilevel optimization or adaptive noise allocation to improve performance and preserve privacy. This work matters because it enables powerful LLMs to be deployed across diverse applications, from medical diagnosis to visual editing, while mitigating resource constraints and privacy concerns.
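As a concrete illustration of the low-rank adaptation idea mentioned above, the sketch below wraps a frozen linear layer with a trainable low-rank update. It is a minimal example assuming PyTorch; the class name LoRALinear and the rank and scaling values are illustrative choices, not drawn from any of the papers listed here.

```python
# Minimal LoRA-style sketch (assumes PyTorch): freeze the pre-trained weight,
# train only a low-rank correction W + (alpha/r) * B @ A.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        # Low-rank factors: A maps in_features -> r, B maps r -> out_features.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction; only lora_A and lora_B receive gradients.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768))
    out = layer(torch.randn(2, 768))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(out.shape, trainable)  # ~12k trainable parameters vs ~590k in the frozen layer
```

In practice such adapters are applied to the attention projection matrices of a transformer, so only a small fraction of parameters is updated during fine-tuning.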
Papers
Steering Large Language Models for Machine Translation with Finetuning and In-Context Learning
Duarte M. Alves, Nuno M. Guerreiro, João Alves, José Pombal, Ricardo Rei, José G. C. de Souza, Pierre Colombo, André F. T. Martins
Interpreting Indirect Answers to Yes-No Questions in Multiple Languages
Zijie Wang, Md Mosharaf Hossain, Shivam Mathur, Terry Cruz Melo, Kadir Bulut Ozler, Keun Hee Park, Jacob Quintero, MohammadHossein Rezaei, Shreya Nupur Shakya, Md Nayem Uddin, Eduardo Blanco
An Emulator for Fine-Tuning Large Language Models using Small Language Models
Eric Mitchell, Rafael Rafailov, Archit Sharma, Chelsea Finn, Christopher D. Manning
Fine-Tuning Generative Models as an Inference Method for Robotic Tasks
Orr Krupnik, Elisei Shafer, Tom Jurgenson, Aviv Tamar
Towards Anytime Fine-tuning: Continually Pre-trained Language Models with Hypernetwork Prompt
Gangwei Jiang, Caigao Jiang, Siqiao Xue, James Y. Zhang, Jun Zhou, Defu Lian, Ying Wei
Empirical study of pretrained multilingual language models for zero-shot cross-lingual knowledge transfer in generation
Nadezhda Chirkova, Sheng Liang, Vassilina Nikoulina
MERTech: Instrument Playing Technique Detection Using Self-Supervised Pretrained Model With Multi-Task Finetuning
Dichucheng Li, Yinghao Ma, Weixing Wei, Qiuqiang Kong, Yulun Wu, Mingjin Che, Fan Xia, Emmanouil Benetos, Wei Li
Don't Fine-Tune, Decode: Syntax Error-Free Tool Use via Constrained Decoding
Kexun Zhang, Hongqiao Chen, Lei Li, William Wang
Sparse Fine-tuning for Inference Acceleration of Large Language Models
Eldar Kurtic, Denis Kuznedelev, Elias Frantar, Michael Goin, Dan Alistarh
FTFT: Efficient and Robust Fine-Tuning by Transferring Training Dynamics
Yupei Du, Albert Gatt, Dong Nguyen
NEFTune: Noisy Embeddings Improve Instruction Finetuning
Neel Jain, Ping-yeh Chiang, Yuxin Wen, John Kirchenbauer, Hong-Min Chu, Gowthami Somepalli, Brian R. Bartoldson, Bhavya Kailkhura, Avi Schwarzschild, Aniruddha Saha, Micah Goldblum, Jonas Geiping, Tom Goldstein
How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition
Guanting Dong, Hongyi Yuan, Keming Lu, Chengpeng Li, Mingfeng Xue, Dayiheng Liu, Wei Wang, Zheng Yuan, Chang Zhou, Jingren Zhou