Low-Rank Adaptation
Low-rank adaptation (LoRA) is a parameter-efficient fine-tuning technique for large pre-trained models that reduces computational cost and memory requirements while maintaining performance on downstream tasks. It works by freezing the pre-trained weights and injecting trainable low-rank matrices into selected layers, so that an adapted weight takes the form W + BA with rank r much smaller than the weight's dimensions. Current research focuses on improving LoRA's efficiency and effectiveness through methods such as tensor decomposition, adaptive parameter allocation, and novel aggregation strategies for federated learning, often applied to transformer-based language and vision models. By shrinking the number of trainable parameters, this approach makes large-model fine-tuning more accessible and enables personalized and specialized models across diverse applications with limited resources.
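To make the mechanism concrete, below is a minimal sketch of a LoRA-augmented linear layer in PyTorch. The class name LoRALinear and the hyperparameters r (rank) and alpha (scaling) are illustrative choices, not taken from any of the papers listed here; the initialization follows the common convention of a small random A and a zero B, so the adapted model starts identical to the pre-trained one.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank update."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights; only A and B receive gradients.
        for p in self.base.parameters():
            p.requires_grad = False
        # Low-rank factors: effective weight is W + (alpha / r) * B @ A.
        # A is small random, B is zero, so the initial update is zero.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base path plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)


# Usage: wrap an existing layer, e.g. a 768-dimensional projection.
layer = LoRALinear(nn.Linear(768, 768), r=8)
```

With this wrapping, each layer trains only r * (in_features + out_features) parameters instead of in_features * out_features, which is the source of LoRA's memory and compute savings.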
Papers
Block-wise LoRA: Revisiting Fine-grained LoRA for Effective Personalization and Stylization in Text-to-Image Generation
Likun Li, Haoqi Zeng, Changpeng Yang, Haozhe Jia, Di Xu
Matrix-Transformation Based Low-Rank Adaptation (MTLoRA): A Brain-Inspired Method for Parameter-Efficient Fine-Tuning
Yao Liang, Yuwei Wang, Yang Li, Yi Zeng
LoRA+: Efficient Low Rank Adaptation of Large Models
Soufiane Hayou, Nikhil Ghosh, Bin Yu
Uncertainty quantification in fine-tuned LLMs using LoRA ensembles
Oleksandr Balabanov, Hampus Linander
Privacy-Preserving Low-Rank Adaptation against Membership Inference Attacks for Latent Diffusion Models
Zihao Luo, Xilie Xu, Feng Liu, Yun Sing Koh, Di Wang, Jingfeng Zhang