Parameter-Efficient Fine-Tuning
Parameter-efficient fine-tuning (PEFT) adapts large pre-trained models to specific downstream tasks while training only a small number of parameters, reducing computational cost and memory requirements. Current research focuses on improving the efficiency and effectiveness of PEFT methods through techniques such as low-rank matrix and tensor decompositions (e.g., LoRA, its variants, and tensor-based adaptations), selective layer training, and novel parameter initialization strategies. These advances matter because they allow large language models and other foundation models to be deployed on resource-constrained devices and make model adaptation for diverse applications more efficient and sustainable.
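To make the low-rank idea concrete, below is a minimal sketch of a LoRA-style adapter in PyTorch. The class name `LoRALinear` and the hyperparameters `r` and `alpha` are illustrative choices, not code from any of the listed papers: the frozen base weights stay untouched while only two small low-rank factors are trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, with A of shape (r, d_in) and B of shape (d_out, r)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pre-trained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T) @ self.lora_B.T

# Only the low-rank factors are trainable: a small fraction of the full layer.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable parameters: {trainable} / {total}")
```

In this hypothetical setup the adapter trains roughly 12K parameters against the layer's ~590K, which is the kind of reduction that makes fine-tuning feasible on resource-constrained hardware.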
Papers
SparseGrad: A Selective Method for Efficient Fine-tuning of MLP Layers
Viktoriia Chekalina, Anna Rudenko, Gleb Mezentsev, Alexander Mikhalev, Alexander Panchenko, Ivan Oseledets
Parameter-Efficient Fine-Tuning via Selective Discrete Cosine Transform
Yixian Shen, Qi Bi, Jia-Hong Huang, Hongyi Zhu, Anuj Pathania
HUT: A More Computation Efficient Fine-Tuning Method With Hadamard Updated Transformation
Geyuan Zhang, Xiaofei Zhou, Chuheng Chen
A Novel Adaptive Fine-Tuning Algorithm for Multimodal Models: Self-Optimizing Classification and Selection of High-Quality Datasets in Remote Sensing
Yi Ren, Tianyi Zhang, Zhixiong Han, Weibin Li, Zhiyang Wang, Wenbo Ji, Chenhao Qin, Chenbin Liang, Licheng Jiao