Orthogonal Parameterization

Orthogonal parameterization is a machine-learning technique that constrains parameter updates to orthogonal transformations, chiefly for adapting pre-trained models to new tasks or datasets without extensive retraining. Because an orthogonal matrix preserves the norms and pairwise angles of a layer's weight vectors, such updates retain much of the pre-trained model's structure while remaining expressive. Current research emphasizes applications across language models, knowledge graph embeddings, and image generation, using constructions such as Householder reflections and butterfly factorizations to improve efficiency and scalability. The approach enables parameter-efficient fine-tuning: large, computationally expensive models can be adapted to diverse downstream tasks with a small number of trainable parameters, while mitigating issues such as bias and privacy leakage.
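
To make the construction concrete, here is a minimal PyTorch sketch of a Householder-based orthogonal adapter for a linear layer. The class name `HouseholderAdapter`, the `num_reflections` hyperparameter, and the paired-vector initialization are illustrative assumptions, not taken from any particular paper; published methods such as OFT and HRA differ in details like normalization, initialization, and which side the orthogonal factor multiplies.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HouseholderAdapter(nn.Module):
    """Hypothetical sketch of orthogonal fine-tuning: the frozen weight of a
    linear layer is left-multiplied by an orthogonal matrix Q built as a
    product of Householder reflections H_i = I - 2 v_i v_i^T / (v_i^T v_i).
    Training r vectors of size d costs r*d parameters instead of d*d."""

    def __init__(self, base: nn.Linear, num_reflections: int = 8):
        super().__init__()
        assert num_reflections % 2 == 0, "use an even count so Q can start at I"
        self.base = base
        for p in self.base.parameters():      # pre-trained weights stay frozen
            p.requires_grad_(False)
        d = base.out_features
        # Assumed initialization trick: give consecutive reflections identical
        # vectors. Since H(v) @ H(v) = I, the product Q equals the identity at
        # step 0, so the adapted layer initially matches the pre-trained one.
        seed = torch.randn(num_reflections // 2, d)
        self.v = nn.Parameter(seed.repeat_interleave(2, dim=0))

    def orthogonal_matrix(self) -> torch.Tensor:
        d = self.base.out_features
        Q = torch.eye(d, device=self.v.device, dtype=self.v.dtype)
        for v in self.v:                         # apply each reflection in turn
            v = v / (v.norm() + 1e-8)            # unit normal of the hyperplane
            Q = Q - 2.0 * torch.outer(v, v @ Q)  # (I - 2 v v^T) @ Q
        return Q

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.orthogonal_matrix() @ self.base.weight  # rotate frozen weight
        return F.linear(x, w, self.base.bias)

# Usage: only the 8 * 768 reflection parameters receive gradients.
layer = HouseholderAdapter(nn.Linear(768, 768), num_reflections=8)
y = layer(torch.randn(4, 768))
Q = layer.orthogonal_matrix()
print(torch.allclose(Q.T @ Q, torch.eye(768), atol=1e-4))  # Q is orthogonal
```

Each reflection perturbs the weight along a single trainable direction, so `num_reflections` trades parameter count against expressiveness, playing a role loosely analogous to the rank in low-rank adapters.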

Papers