Orthogonal Fine-Tuning
Orthogonal fine-tuning (OFT) is a parameter-efficient approach to adapting large pre-trained models to downstream tasks: rather than updating the weights directly, it multiplies each pre-trained weight matrix by a learned orthogonal matrix. Because orthogonal transformations preserve the pairwise angles between neurons, the method retains much of the pre-trained model's knowledge while still allowing task-specific adaptation. Research focuses on efficient orthogonal parameterizations, such as those based on Givens rotations or butterfly factorizations, that reduce the number of trainable parameters without sacrificing performance or generalization. The technique shows promise for improving the robustness and efficiency of fine-tuning across a range of architectures, including vision-language models, large language models, and text-to-image diffusion models, yielding strong performance on diverse tasks at reduced computational cost.
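As a concrete illustration, the sketch below adapts a single linear layer in the spirit of OFT: the pre-trained weight is frozen, and a trainable orthogonal matrix, obtained from a skew-symmetric matrix via the Cayley transform, rotates it. This is a minimal sketch assuming a PyTorch setting; the class name `OFTLinear` and the dense (non-block-diagonal) parameterization are illustrative simplifications, not a reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OFTLinear(nn.Module):
    """Orthogonal fine-tuning of one linear layer (illustrative sketch).

    The pre-trained weight W is frozen; a trainable orthogonal matrix R,
    derived from a skew-symmetric matrix S via the Cayley transform
    R = (I + S)^{-1} (I - S), left-multiplies it: W' = R W.
    """

    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # keep pre-trained weights frozen
        d = base.out_features
        # Free parameters of the skew-symmetric generator; S = 0 at init,
        # so R = I and the wrapped layer starts identical to the base layer.
        self.skew = nn.Parameter(torch.zeros(d, d))

    def orthogonal_matrix(self) -> torch.Tensor:
        S = self.skew - self.skew.T                 # enforce S = -S^T
        I = torch.eye(S.size(0), device=S.device, dtype=S.dtype)
        return torch.linalg.solve(I + S, I - S)     # Cayley transform

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        R = self.orthogonal_matrix()
        return F.linear(x, R @ self.base.weight, self.base.bias)

# Example: only the skew parameters train; the 768x768 base weight does not.
layer = OFTLinear(nn.Linear(768, 768))
out = layer(torch.randn(4, 768))
```

Note that a dense skew-symmetric generator still has on the order of d² trainable entries, which defeats the purpose at scale; this is precisely why the literature factorizes the orthogonal matrix into block-diagonal pieces, products of Givens rotations, or butterfly structures.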