LoRA-Based Methods
Low-Rank Adaptation (LoRA) is a parameter-efficient fine-tuning technique for large language models (LLMs) and other deep learning architectures that reduces computational cost and memory requirements while maintaining or improving performance on downstream tasks. It does so by freezing the pretrained weights and training only a small low-rank update. Current research focuses on improving LoRA's efficiency through methods such as orthogonal initialization (SBoRA), dynamic rank selection (QDyLoRA), and heterogeneous rank allocation across distributed settings (HetLoRA), as well as on addressing challenges such as overfitting (LoRA Dropout) and adversarial attacks. These advances matter for deploying LLMs on resource-constrained devices and for enabling more efficient, scalable model adaptation across diverse applications, including medical image analysis and federated learning.
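The core idea can be sketched in a few lines. The snippet below is a minimal illustration, not a production implementation: it models a single dense layer whose frozen weight `W` is adapted by a trainable low-rank product `B @ A` scaled by `alpha / r`, following the standard LoRA formulation. All dimensions, the rank `r`, and the scaling `alpha` are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 64, 32, 4   # r is the LoRA rank, with r << min(d_in, d_out)
alpha = 8.0                  # scaling factor; the update is scaled by alpha / r

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight (not trained)
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection, small init
B = np.zeros((d_out, r))                   # trainable up-projection, zero init,
                                           # so the update starts at exactly zero

def lora_forward(x):
    # Frozen path plus the scaled low-rank path; only A and B are trained.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer matches the frozen layer exactly.
assert np.allclose(lora_forward(x), W @ x)

# Parameter-efficiency: r * (d_in + d_out) trainable values versus
# d_in * d_out for full fine-tuning of this layer.
full_params = d_in * d_out
lora_params = r * (d_in + d_out)
print(f"trainable params: {lora_params} vs {full_params}")
```

After training, the update can be merged as `W + (alpha / r) * B @ A`, so inference incurs no extra latency; methods like QDyLoRA vary `r` and HetLoRA assigns different ranks per client, but all operate on this same low-rank decomposition.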