Parameter-Efficient Fine-Tuning
Parameter-efficient fine-tuning (PEFT) adapts large pre-trained models to specific downstream tasks while training only a small fraction of their parameters, reducing computational cost and memory requirements. Current research focuses on improving the efficiency and effectiveness of PEFT methods through techniques such as low-rank matrix and tensor decompositions (e.g., LoRA, its variants, and tensor-based adaptations), selective layer training, and novel parameter initialization strategies. These advances matter because they enable large language models and other foundation models to be deployed on resource-constrained devices and support more efficient, sustainable model adaptation across diverse applications.
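To make the low-rank idea concrete, below is a minimal sketch of a LoRA-style adapter around a single frozen linear layer, assuming PyTorch. The class name, rank, and scaling values are illustrative choices for this sketch, not taken from any of the papers listed here.

# Minimal LoRA-style adapter sketch (assumes PyTorch; names and hyperparameters are illustrative).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights; only the low-rank factors are trained.
        for p in self.base.parameters():
            p.requires_grad = False
        # Low-rank factors: A projects down to `rank`, B projects back up.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank update (B A) x.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Example: wrapping a 768x768 projection trains only 2 * rank * 768 extra parameters.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")

Initializing B to zero keeps the adapted model identical to the pre-trained one at the start of fine-tuning, which is a common design choice in LoRA-style methods; the variants and tensor-based adaptations surveyed above differ mainly in how this low-rank (or higher-order) update is parameterized, quantized, or allocated across layers.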
Papers
Is Multiple Object Tracking a Matter of Specialization?
Gianluca Mancusi, Mattia Bernardi, Aniello Panariello, Angelo Porrello, Rita Cucchiara, Simone Calderara
C2A: Client-Customized Adaptation for Parameter-Efficient Federated Learning
Yeachan Kim, Junho Kim, Wing-Lam Mok, Jun-Hyung Park, SangKeun Lee
Meta-Learning Adaptable Foundation Models
Jacob L. Block, Sundararajan Srinivasan, Liam Collins, Aryan Mokhtari, Sanjay Shakkottai
Capacity Control is an Effective Memorization Mitigation Mechanism in Text-Conditional Diffusion Models
Raman Dutt, Pedro Sanchez, Ondrej Bohdal, Sotirios A. Tsaftaris, Timothy Hospedales
Preserving Pre-trained Representation Space: On Effectiveness of Prefix-tuning for Large Multi-modal Models
Donghoon Kim, Gusang Lee, Kyuhong Shim, Byonghyo Shim
IntLoRA: Integral Low-rank Adaptation of Quantized Diffusion Models
Hang Guo, Yawei Li, Tao Dai, Shu-Tao Xia, Luca Benini
Parameter-Efficient Fine-Tuning in Large Models: A Survey of Methodologies
Luping Wang, Sheng Chen, Linnan Jiang, Shu Pan, Runze Cai, Sen Yang, Fei Yang
GeoLoRA: Geometric integration for parameter efficient fine-tuning
Steffen Schotthöfer, Emanuele Zangrando, Gianluca Ceruti, Francesco Tudisco, Jonas Kusch