Downstream Recognition
Downstream recognition focuses on adapting pre-trained large language and vision-language models to specific tasks, improving both their accuracy and their efficiency. Current research emphasizes parameter-efficient transfer learning methods, such as prompt tuning and dynamic visual prompt tuning, which train only a small set of additional parameters while keeping the pre-trained backbone frozen, achieving strong accuracy on diverse downstream tasks at a fraction of the cost of full fine-tuning. These advances matter because they make powerful pre-trained models practical for a wider range of applications, including image classification, object detection, and text recognition, while also offering a handle on spurious correlations and biases learned during pre-training.
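To make the idea concrete, here is a minimal sketch of soft prompt tuning in PyTorch. It is an illustration of the general technique, not the implementation from any particular paper: the class name `PromptTuner`, the toy backbone, and the dimensions are all assumptions. A small matrix of learnable prompt embeddings is prepended to the input sequence, the pre-trained backbone is frozen, and only the prompts receive gradients.

```python
import torch
import torch.nn as nn

class PromptTuner(nn.Module):
    """Illustrative soft prompt tuning: trainable prompt embeddings are
    prepended to the input embeddings of a frozen pre-trained backbone,
    so only num_prompts * embed_dim parameters are updated."""

    def __init__(self, backbone: nn.Module, embed_dim: int, num_prompts: int = 8):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad_(False)  # freeze all pre-trained weights
        # The only trainable parameters: (num_prompts, embed_dim).
        self.prompts = nn.Parameter(torch.randn(num_prompts, embed_dim) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim)
        batch = x.size(0)
        p = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        return self.backbone(torch.cat([p, x], dim=1))

# Hypothetical frozen backbone: a single token-wise linear layer stands in
# for a real pre-trained transformer.
backbone = nn.Linear(16, 16)
model = PromptTuner(backbone, embed_dim=16, num_prompts=4)

out = model(torch.randn(2, 5, 16))  # 4 prompt tokens + 5 input tokens
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
```

With these toy sizes, only 64 of the 336 parameters are trainable (4 prompts x 16 dimensions), which is the point of the method: the optimizer state and the per-task checkpoint scale with the prompt, not with the backbone.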