Parameter-Efficient Fine-Tuning
Parameter-efficient fine-tuning (PEFT) adapts large pre-trained models to specific downstream tasks while keeping the number of trainable parameters small, reducing both computational cost and memory requirements. Current research focuses on improving the efficiency and effectiveness of PEFT methods through techniques such as low-rank matrix and tensor decompositions (e.g., LoRA and its variants), selective layer training, and novel parameter initialization strategies. These advances matter because they enable large language models and other foundation models to be deployed on resource-constrained devices, and they make model adaptation for diverse applications more efficient and sustainable.
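To make the low-rank idea concrete, below is a minimal PyTorch sketch of a LoRA-style adapter: the pre-trained weight is frozen and only a small low-rank update is trained. The class name `LoRALinear` and the hyperparameters `r` and `alpha` are illustrative defaults, not taken from any of the papers listed here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    h = W x + (alpha / r) * B A x, where only A and B are trained."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scaling = alpha / r
        # A starts with small random values, B with zeros, so the adapter
        # contributes nothing at step 0 and the model begins unchanged.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: swap in the adapter and train only its parameters.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 2 * 8 * 768 = 12,288 trainable vs. ~590K frozen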
Papers
LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models
Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, Yongqiang Ma
AdaViPro: Region-based Adaptive Visual Prompt for Large-Scale Models Adapting
Mengyu Yang, Ye Tian, Lanshan Zhang, Xiao Liang, Xuming Ran, Wendong Wang
AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models
Zeyu Liu, Souvik Kundu, Anni Li, Junrui Wan, Lianghao Jiang, Peter Anthony Beerel
Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation
Wangbo Zhao, Jiasheng Tang, Yizeng Han, Yibing Song, Kai Wang, Gao Huang, Fan Wang, Yang You
Let's Focus on Neuron: Neuron-Level Supervised Fine-tuning for Large Language Model
Haoyun Xu, Runzhe Zhan, Derek F. Wong, Lidia S. Chao
Introducing Routing Functions to Vision-Language Parameter-Efficient Fine-Tuning with Low-Rank Bottlenecks
Tingyu Qu, Tinne Tuytelaars, Marie-Francine Moens
PYRA: Parallel Yielding Re-Activation for Training-Inference Efficient Task Adaptation
Yizhe Xiong, Hui Chen, Tianxiang Hao, Zijia Lin, Jungong Han, Yuesong Zhang, Guoxin Wang, Yongjun Bao, Guiguang Ding
Targeted Efficient Fine-tuning: Optimizing Parameter Updates with Data-Driven Sample Selection
Ming Dong, Kang Xue, Bolong Zheng, Tingting He
Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts
Shengzhuang Chen, Jihoon Tack, Yunqiao Yang, Yee Whye Teh, Jonathan Richard Schwarz, Ying Wei