Representation Finetuning

Representation finetuning adapts large language models and other neural networks by selectively editing their internal representations rather than updating all of the model's weights. Current methods either modify activation vectors directly or learn low-rank adjustments within a linear subspace of the hidden states, aiming to match or exceed full fine-tuning while training far fewer parameters. Beyond this parameter efficiency, the approach improves performance across a range of downstream tasks and may also yield insight into how these models organize information internally.
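
As a concrete illustration of the low-rank subspace idea, the sketch below implements a LoReFT-style intervention, h' = h + Rᵀ(Wh + b − Rh), which replaces the component of a hidden state h lying in a rank-r subspace (spanned by the rows of R) with learned target coordinates Wh + b. This is a minimal sketch assuming PyTorch; the class name, rank, and hidden size are illustrative choices, not taken from any specific paper in the list below.

```python
import torch
import torch.nn as nn


class LowRankReprIntervention(nn.Module):
    """LoReFT-style edit of a hidden state: h' = h + R^T (W h + b - R h).

    R projects h into a rank-r subspace; W and b propose new coordinates
    for that subspace. Only R, W, and b are trained; the base model stays frozen.
    """

    def __init__(self, hidden_size: int, rank: int):
        super().__init__()
        # R: low-rank projection, initialized with orthonormal rows.
        self.R = nn.Parameter(torch.empty(rank, hidden_size))
        nn.init.orthogonal_(self.R)
        # Initializing W as a copy of R makes the intervention start as
        # the identity (W h + b - R h = 0), an assumption for stable training.
        self.W = nn.Parameter(self.R.detach().clone())
        self.b = nn.Parameter(torch.zeros(rank))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # delta holds the subspace correction in rank-r coordinates.
        delta = h @ self.W.T + self.b - h @ self.R.T   # (batch, rank)
        # Map the correction back into the hidden space and add it to h.
        return h + delta @ self.R                       # (batch, hidden)


if __name__ == "__main__":
    # Hypothetical usage: in practice this module would be attached as a
    # forward hook on one layer of a frozen transformer.
    intervention = LowRankReprIntervention(hidden_size=768, rank=4)
    h = torch.randn(2, 768)        # stand-in for a layer's activations
    print(intervention(h).shape)   # torch.Size([2, 768])
```

Because only R, W, and b are trained, the trainable-parameter count scales with rank × hidden size per intervened layer rather than with the full model size, which is the source of the parameter efficiency described above.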

Papers