Parameter-Efficient
Parameter-efficient fine-tuning (PEFT) methods adapt large pre-trained models to new tasks by training only a small number of additional parameters, addressing the computational and memory costs of full fine-tuning. Current research focuses on developing new PEFT algorithms, such as LoRA and adapter methods, and on applying them to architectures including transformers and convolutional neural networks across domains such as natural language processing, computer vision, and medical image analysis. This work matters because it enables powerful models to be deployed on resource-limited devices and shortens training, broadening the accessibility and applicability of advanced machine learning techniques.
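The summary above centers on methods such as LoRA, whose core idea is to freeze the pre-trained weights and train only a small low-rank update alongside them. The sketch below is a minimal illustration of that idea, assuming a PyTorch environment; the names LoRALinear, r, and alpha are illustrative choices and not taken from any of the listed papers.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update (LoRA-style).

    The effective weight becomes W + (alpha / r) * B @ A, where only A and B
    are trained; the pre-trained weight W stays frozen.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights

        in_f, out_f = base.in_features, base.out_features
        self.lora_A = nn.Parameter(torch.randn(r, in_f) * 0.01)  # down-projection
        self.lora_B = nn.Parameter(torch.zeros(out_f, r))        # up-projection, zero-init
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank update.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16.0)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable: {trainable} / {total} parameters")

Wrapping a single 768x768 projection this way trains roughly 12K of its ~590K parameters, which is the kind of reduction that makes adaptation feasible on resource-limited hardware.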
Papers
Advancing Parameter Efficiency in Fine-tuning via Representation Editing
Muling Wu, Wenhao Liu, Xiaohua Wang, Tianlong Li, Changze Lv, Zixuan Ling, Jianhao Zhu, Cenyuan Zhang, Xiaoqing Zheng, Xuanjing Huang
Infusing Hierarchical Guidance into Prompt Tuning: A Parameter-Efficient Framework for Multi-level Implicit Discourse Relation Recognition
Haodong Zhao, Ruifang He, Mengnan Xiao, Jing Xu