Fine-Tuning
Fine-tuning adapts pre-trained large language models (LLMs) to specific tasks, yielding better task performance at far lower cost than training from scratch. Current research emphasizes parameter-efficient methods such as low-rank adaptation (LoRA), along with techniques that address catastrophic forgetting and calibration issues, often via bilevel optimization or adaptive noise allocation to improve performance and preserve privacy. This work matters because it enables powerful LLMs to be deployed across diverse applications, from medical diagnosis to visual editing, while mitigating resource constraints and privacy concerns.
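To make the LoRA idea mentioned above concrete, here is a minimal sketch of a low-rank adapter around a frozen linear layer, assuming PyTorch. The class name `LoRALinear` and the `rank`/`alpha` hyperparameters are illustrative assumptions for this sketch, not code from any of the papers listed below.

```python
# Minimal LoRA-style adapter sketch (assumed names, not from the listed papers).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where A (r x in) and B (out x r)
    are the only trainable parameters."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # A is initialized with small noise, B with zeros, so the adapter
        # starts as a no-op and the wrapped model is unchanged at step 0.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: swap LoRALinear in for selected projections of a pre-trained model,
# then optimize only the adapter parameters (lora_A, lora_B).
layer = LoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(2, 768))
```

The key design point is that only the two small low-rank matrices receive gradients, which is what makes this family of methods cheap enough to fine-tune large models under tight memory budgets.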
Papers
FeTrIL++: Feature Translation for Exemplar-Free Class-Incremental Learning with Hill-Climbing
Eduard Hogea, Adrian Popescu, Darian Onchis, Grégoire Petit
SmallToLarge (S2L): Scalable Data Selection for Fine-tuning Large Language Models by Summarizing Training Trajectories of Small Models
Yu Yang, Siddhartha Mishra, Jeffrey N Chiang, Baharan Mirzasoleiman
On the Generalization Ability of Unsupervised Pretraining
Yuyang Deng, Junyuan Hong, Jiayu Zhou, Mehrdad Mahdavi
QuantTune: Optimizing Model Quantization with Adaptive Outlier-Driven Fine Tuning
Jiun-Man Chen, Yu-Hsuan Chao, Yu-Jie Wang, Ming-Der Shieh, Chih-Chung Hsu, Wei-Fen Lin
Can LLMs' Tuning Methods Work in Medical Multimodal Domain?
Jiawei Chen, Yue Jiang, Dingkang Yang, Mingcheng Li, Jinjie Wei, Ziyun Qian, Lihua Zhang
IndicLLMSuite: A Blueprint for Creating Pre-training and Fine-Tuning Datasets for Indian Languages
Mohammed Safi Ur Rahman Khan, Priyam Mehta, Ananth Sankar, Umashankar Kumaravelan, Sumanth Doddapaneni, Suriyaprasaad G, Varun Balan G, Sparsh Jain, Anoop Kunchukuttan, Pratyush Kumar, Raj Dabre, Mitesh M. Khapra