Parameter-Efficient Fine-Tuning
Parameter-efficient fine-tuning (PEFT) aims to adapt large pre-trained models to specific downstream tasks while minimizing the number of trainable parameters, thus reducing computational costs and memory requirements. Current research focuses on improving the efficiency and effectiveness of PEFT methods, exploring techniques like low-rank matrix and tensor decompositions (e.g., LoRA, its variants, and tensor-based adaptations), selective layer training, and novel parameter initialization strategies. These advancements are significant because they enable the deployment of large language models and other foundation models on resource-constrained devices and facilitate more efficient and sustainable model adaptation for diverse applications.
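To make the low-rank idea concrete, here is a minimal sketch of a LoRA-style adapter in PyTorch: a frozen pre-trained linear layer is augmented with a trainable low-rank update, so only a small fraction of parameters is optimized. The class name `LoRALinear` and the hyperparameters `r` and `alpha` are illustrative choices, not taken from any of the papers listed below.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        self.scaling = alpha / r
        # A gets small random values, B starts at zero, so the adapter is
        # initially a no-op (a common LoRA initialization choice).
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Low-rank path: x -> A^T -> B^T, scaled and added to the frozen output.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable parameters: {trainable} / {total}")  # only the low-rank factors train
```

In this sketch only the two rank-r factors (roughly 2 * r * d parameters per layer) receive gradients, which is what keeps memory and compute requirements low relative to full fine-tuning.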
Papers
Increasing Model Capacity for Free: A Simple Strategy for Parameter Efficient Fine-tuning
Haobo Song, Hao Zhao, Soumajit Majumder, Tao Lin
Embedded Prompt Tuning: Towards Enhanced Calibration of Pretrained Models for Medical Images
Wenqiang Zu, Shenghao Xie, Qing Zhao, Guoqi Li, Lei Ma
SplitLoRA: A Split Parameter-Efficient Fine-Tuning Framework for Large Language Models
Zheng Lin, Xuanjie Hu, Yuxin Zhang, Zhe Chen, Zihan Fang, Xianhao Chen, Ang Li, Praneeth Vepakomma, Yue Gao
Bayesian-LoRA: LoRA based Parameter Efficient Fine-Tuning using Optimal Quantization levels and Rank Values through Differentiable Bayesian Gates
Cristian Meo, Ksenia Sycheva, Anirudh Goyal, Justin Dauwels
Fighting Randomness with Randomness: Mitigating Optimisation Instability of Fine-Tuning using Delayed Ensemble and Noisy Interpolation
Branislav Pecher, Jan Cegin, Robert Belanec, Jakub Simko, Ivan Srba, Maria Bielikova