Representation Finetuning
Representation finetuning adapts large language models and other neural networks by selectively editing their internal representations rather than retraining the entire model. Current research explores methods such as directly modifying activation vectors and learning low-rank linear subspace adjustments to hidden states, aiming for greater efficiency and better task performance than full fine-tuning. Because only a small set of intervention parameters is trained, the approach is highly parameter-efficient, and studying which representations must change to alter model behavior can also shed light on the internal workings of these systems.
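To make the low-rank subspace idea concrete, the sketch below edits a hidden state h only inside a learned r-dimensional subspace, computing h + Rᵀ(Wh + b − Rh) so that the component of h orthogonal to the subspace passes through unchanged. This is a minimal, generic PyTorch illustration under stated assumptions, not any particular paper's implementation; the class name, dimensions, and toy usage are invented for the example.

```python
import torch
import torch.nn as nn


class LowRankIntervention(nn.Module):
    """Edit a hidden state only inside a learned rank-r subspace.

    Computes h + R^T (W h + b - R h): the projection of h onto the
    r-dimensional subspace spanned by the rows of R is replaced by a
    learned linear function of h, while the orthogonal complement
    of h is left untouched.
    """

    def __init__(self, hidden_size: int, rank: int):
        super().__init__()
        # Constrain R to have orthonormal rows so it defines a clean
        # r-dimensional subspace of the hidden space.
        self.R = nn.utils.parametrizations.orthogonal(
            nn.Linear(hidden_size, rank, bias=False)
        )
        self.proj = nn.Linear(hidden_size, rank)  # computes W h + b

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (..., hidden_size). Only ~2 * rank * hidden_size
        # parameters are trainable; the base model stays frozen.
        return h + (self.proj(h) - self.R(h)) @ self.R.weight


# Toy usage: intervene on one layer's output for a batch of tokens.
hidden_size, rank = 768, 4
intervention = LowRankIntervention(hidden_size, rank)
h = torch.randn(2, 16, hidden_size)  # (batch, seq_len, hidden_size)
edited = intervention(h)
print(edited.shape)  # torch.Size([2, 16, 768])
```

Because only the rank-r subspace is edited, trainable parameters scale with r·d rather than d², and an intervention like this would typically be attached (e.g., via a forward hook) to a single layer of an otherwise frozen model.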