Black-Box Tuning
Black-box tuning adapts large, proprietary language and vision-language models without access to their internal parameters or gradients, optimizing task performance through techniques such as prompt engineering and output-feature adaptation. Current research emphasizes efficient derivative-free optimization algorithms, including methods inspired by gradient descent and approaches that employ proxy models or low-dimensional subspace optimization, with the goal of matching the performance of full fine-tuning. The area is significant because it makes powerful closed-source models practically adaptable while mitigating the security risks of granting full model access, opening new possibilities for research and deployment across many fields.
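To make the setting concrete, here is a minimal sketch of derivative-free prompt tuning with a (1+1) evolution strategy: the optimizer only queries a scalar score from the model and never sees gradients. The `black_box_score` function is a hypothetical stand-in for a closed model's API (in practice it would submit a decoded prompt and return task accuracy or negative loss); all names and the quadratic toy objective are illustrative assumptions, not part of any specific method from the literature.

```python
import numpy as np


def black_box_score(prompt_vec, target=None):
    # Hypothetical stand-in for a closed model's scoring API: a real
    # setup would send the candidate prompt to the model and read back
    # a task metric. Here, a toy quadratic with a hidden optimum.
    if target is None:
        target = np.linspace(-1.0, 1.0, prompt_vec.size)
    return -float(np.sum((prompt_vec - target) ** 2))


def tune_prompt(dim=16, iters=500, sigma=0.1, seed=0):
    """(1+1) evolution strategy: gradient-free hill climbing over a
    low-dimensional prompt variable, using only score queries."""
    rng = np.random.default_rng(seed)
    z = np.zeros(dim)                      # subspace prompt variable
    best = black_box_score(z)
    for _ in range(iters):
        candidate = z + sigma * rng.standard_normal(dim)
        score = black_box_score(candidate)
        if score > best:                   # keep mutation only if it helps
            z, best = candidate, score
    return z, best
```

In a real pipeline, the low-dimensional variable `z` would typically be projected into the model's prompt-embedding space (the subspace-optimization idea mentioned above), which keeps the number of black-box queries manageable.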