Efficient Transfer Learning
Efficient transfer learning (ETL) aims to adapt large pre-trained models to new tasks with minimal computational resources and data, reducing both the number of parameters that must be updated and the overall training time. Current research emphasizes parameter-efficient techniques such as prompt tuning, adapters, and various forms of side networks, often applied to vision-language models, transformers, and other deep learning architectures. These advances are crucial for deploying large models in resource-constrained environments and for accelerating the development of AI solutions across diverse fields, including medical imaging, industrial signal processing, and natural language processing. The ultimate goal is to match, or even exceed, the performance of full fine-tuning at a significantly reduced computational cost.
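As a rough illustration of the adapter idea mentioned above (a minimal sketch, not taken from any of the papers listed below), the PyTorch snippet that follows inserts a small residual bottleneck module into an otherwise frozen backbone and trains only that module. The names Adapter and mark_only_adapters_trainable, the bottleneck width, and the toy backbone are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: a small residual MLP added inside a frozen backbone.

    Only the adapter's parameters are trained; the pre-trained weights stay
    frozen, which is the core of parameter-efficient transfer learning.
    """

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # project down
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # project back up
        nn.init.zeros_(self.up.weight)                     # start as identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the original representation intact at init.
        return x + self.up(self.act(self.down(x)))


def mark_only_adapters_trainable(model: nn.Module) -> None:
    """Freeze every parameter except those inside Adapter modules."""
    for param in model.parameters():
        param.requires_grad = False
    for module in model.modules():
        if isinstance(module, Adapter):
            for param in module.parameters():
                param.requires_grad = True


# Toy usage: a single frozen linear layer followed by a trainable adapter.
backbone = nn.Sequential(nn.Linear(768, 768), Adapter(768))
mark_only_adapters_trainable(backbone)
trainable = sum(p.numel() for p in backbone.parameters() if p.requires_grad)
total = sum(p.numel() for p in backbone.parameters())
print(f"trainable parameters: {trainable} / {total}")
```

In a realistic setting the adapter would be inserted after the attention or feed-forward sublayers of each transformer block, so the fraction of trainable parameters typically stays in the low single-digit percent range.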
Papers
Resource-Efficient Transfer Learning From Speech Foundation Model Using Hierarchical Feature Fusion
Zhouyuan Huo, Khe Chai Sim, Bo Li, Dongseong Hwang, Tara N. Sainath, Trevor Strohman
Integrated Parameter-Efficient Tuning for General-Purpose Audio Models
Ju-ho Kim, Jungwoo Heo, Hyun-seo Shin, Chan-yeong Lim, Ha-Jin Yu