Parameter-Efficient Fine-Tuning
Parameter-efficient fine-tuning (PEFT) methods adapt large pre-trained models to new tasks by training only a small fraction of additional parameters, sidestepping the compute and memory cost of full fine-tuning. Current research focuses on developing new PEFT algorithms, such as LoRA and adapter methods, and on applying them to architectures ranging from transformers to convolutional neural networks across domains like natural language processing, computer vision, and medical image analysis. This line of work matters because it enables powerful models to be deployed on resource-limited devices and shortens training time, broadening the accessibility and applicability of advanced machine learning techniques.
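To make the idea concrete, below is a minimal sketch of the LoRA approach mentioned above, following the standard formulation (freeze the pre-trained weights and learn a low-rank residual update). The class name `LoRALinear` and the hyperparameters `r` and `alpha` are illustrative choices, not code from any of the listed papers.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where A and B are the only trained weights."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pre-trained weights
        # A is initialized with small random values, B with zeros, so the
        # low-rank update starts at zero and the model initially matches
        # the pre-trained one exactly.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r             # standard LoRA scaling factor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus trainable low-rank residual.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap a layer and count what actually gets trained.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")      # 12,288 vs. ~590k for full fine-tuning
```

With rank 8 on a 768-by-768 layer, the adapter trains 2 * 8 * 768 = 12,288 parameters instead of roughly 590k, which is the parameter saving that PEFT methods trade on.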
Papers
Towards Full-parameter and Parameter-efficient Self-learning For Endoscopic Camera Depth Estimation
Shuting Zhao, Chenkang Du, Kristin Qi, Xinrong Chen, Xinhan Di
MoS: Unleashing Parameter Efficiency of Low-Rank Adaptation with Mixture of Shards
Sheng Wang, Liheng Chen, Pengan Chen, Jingwei Dong, Boyang Xue, Jiyue Jiang, Lingpeng Kong, Chuan Wu