Low-Rank Adaptation (LoRA)
Low-rank adaptation (LoRA) is a parameter-efficient fine-tuning technique for large pre-trained models: instead of updating all weights, it learns small low-rank update matrices, reducing computational cost and memory requirements while maintaining performance on downstream tasks. Current research focuses on improving LoRA's efficiency and effectiveness through methods such as tensor decomposition, adaptive parameter allocation, and novel aggregation strategies for federated learning, most often applied to transformer-based language and vision models. This approach holds significant promise for making large-model fine-tuning more accessible and for enabling personalized and specialized models across diverse applications under limited resources.
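The core idea above can be sketched in a few lines. The snippet below is a minimal, illustrative NumPy sketch (not any paper's reference implementation): the pre-trained weight `W` stays frozen, and only the low-rank factors `A` and `B` would be trained. The dimensions, scaling factor `alpha`, and initialization follow the common convention of initializing `B` to zero so the adapted layer starts out identical to the pre-trained one.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 768, 768, 8            # rank r << min(d_in, d_out)
W = rng.standard_normal((d_out, d_in))  # frozen pre-trained weight

# Trainable LoRA factors: A initialized small, B at zero, so the
# low-rank update B @ A is zero at the start of fine-tuning.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))
alpha = 16.0  # common scaling hyperparameter (assumed value)

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x); only A and B receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B == 0 the adapted layer matches the frozen pre-trained layer.
assert np.allclose(lora_forward(x), W @ x)

# Parameter count: full fine-tuning vs. LoRA factors only.
full_params = W.size            # 768 * 768 = 589,824
lora_params = A.size + B.size   # 8 * 768 * 2 = 12,288
print(full_params, lora_params)
```

The parameter comparison at the end illustrates why LoRA is "parameter-efficient": the trainable factors here are roughly 2% of the full weight matrix, and the ratio shrinks further as model width grows.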
105 papers
March 30, 2024
March 29, 2024
March 22, 2024
March 12, 2024
Block-wise LoRA: Revisiting Fine-grained LoRA for Effective Personalization and Stylization in Text-to-Image Generation
Likun Li, Haoqi Zeng, Changpeng Yang, Haozhe Jia, Di Xu
Matrix-Transformation Based Low-Rank Adaptation (MTLoRA): A Brain-Inspired Method for Parameter-Efficient Fine-Tuning
Yao Liang, Yuwei Wang, Yang Li, Yi Zeng
February 24, 2024
February 19, 2024
LoRA+: Efficient Low Rank Adaptation of Large Models
Soufiane Hayou, Nikhil Ghosh, Bin Yu
Uncertainty quantification in fine-tuned LLMs using LoRA ensembles
Oleksandr Balabanov, Hampus Linander
Privacy-Preserving Low-Rank Adaptation against Membership Inference Attacks for Latent Diffusion Models
Zihao Luo, Xilie Xu, Feng Liu, Yun Sing Koh, Di Wang, Jingfeng Zhang