Parameter-Efficient Fine-Tuning
Parameter-efficient fine-tuning (PEFT) methods adapt large pre-trained models to new tasks by training only a small number of additional parameters, sidestepping the computational and memory costs of full fine-tuning. Current research focuses on developing new PEFT algorithms, such as LoRA and adapter-based methods, and on applying them to architectures ranging from transformers to convolutional neural networks, across domains such as natural language processing, computer vision, and medical image analysis. This work matters because it enables powerful models to be deployed on resource-limited devices and shortens training, broadening the accessibility and applicability of advanced machine learning techniques.
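To make the core idea concrete, below is a minimal PyTorch-style sketch of a LoRA-wrapped linear layer, the kind of low-rank update these methods rely on. The class name `LoRALinear` and the hyperparameter values are illustrative, not taken from any of the papers listed here; real implementations (e.g., the `peft` library) add merging, dropout, and per-module targeting.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA layer: frozen base weight W plus a trainable
    low-rank update, computing W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights; only the low-rank factors train.
        for p in self.base.parameters():
            p.requires_grad = False
        # A is small random init; B is zero init, so the wrapped layer
        # starts out exactly equal to the pre-trained layer.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Example: for a 768x768 projection, full fine-tuning updates ~590K weights,
# while this rank-8 adapter trains only 2 * 8 * 768 = 12,288 parameters.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 12288
```

The parameter count in the example is what makes PEFT attractive: the trainable footprint scales with the rank r rather than with the full weight matrix.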
Papers
Parameter-Efficient Learning for Text-to-Speech Accent Adaptation
Li-Jen Yang, Chao-Han Huck Yang, Jen-Tzung Chien
A Parameter-Efficient Learning Approach to Arabic Dialect Identification with Pre-Trained General-Purpose Speech Model
Srijith Radhakrishnan, Chao-Han Huck Yang, Sumeer Ahmad Khan, Narsis A. Kiani, David Gomez-Cabrero, Jesper N. Tegner