Proxy Tuning
Proxy tuning is a technique for adapting large, often proprietary, language or vision-language models without direct access to their internal parameters. Research focuses on training smaller "proxy" models whose outputs are then used to steer the larger model's predictions, typically at decoding time, improving performance on specific tasks or datasets. This approach addresses the resource constraints and privacy concerns associated with directly fine-tuning massive models, offering a more efficient and accessible way to customize their behavior. The effectiveness of different proxy training methods, and their applicability across model architectures and tasks, remain key areas of ongoing investigation.
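A minimal sketch of one common instantiation, decoding-time logit arithmetic: the large base model's next-token logits are shifted by the difference between a small tuned "expert" proxy and its untuned "anti-expert" counterpart. The function name, the `alpha` scaling parameter, and the toy logit values below are illustrative assumptions, not a specific system's API.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def proxy_tuned_distribution(base_logits, expert_logits, antiexpert_logits, alpha=1.0):
    # Shift the base model's logits by the expert/anti-expert difference,
    # then renormalize into a next-token distribution.
    adjusted = base_logits + alpha * (expert_logits - antiexpert_logits)
    return softmax(adjusted)

# Toy 4-token vocabulary; values are made up for illustration.
base   = np.array([2.0, 1.0, 0.5, 0.1])   # large, untuned base model
expert = np.array([1.5, 2.5, 0.2, 0.1])   # small proxy after task tuning
anti   = np.array([1.5, 1.0, 0.6, 0.1])   # same small proxy before tuning

probs = proxy_tuned_distribution(base, expert, anti)
```

Here the proxy pair only contributes the *change* that tuning induced, so idiosyncrasies of the small model that tuning did not alter cancel out of the adjustment.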