Transferable Visual Representation Learning
Transferable visual representation learning aims to create visual models that generalize well across diverse datasets and tasks, minimizing the need for extensive retraining. Current research emphasizes improving the transferability of pre-trained models, such as diffusion models and vision-language models like CLIP, often employing techniques such as adapters, knowledge distillation, and prototype learning to adapt these models to new domains with limited data. This field matters because it promises more efficient and robust computer vision systems, with impact on applications ranging from image classification and object detection to medical imaging and autonomous driving.
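As a concrete illustration of the adapter technique mentioned above, the sketch below shows a minimal residual bottleneck adapter in the style of CLIP-Adapter: a frozen encoder's feature vector is passed through a small down-project/ReLU/up-project network, then blended residually with the original features. All dimensions, weights, and the blend ratio `alpha` here are illustrative assumptions, not values from any specific paper.

```python
import numpy as np

def adapter(features, W1, b1, W2, b2, alpha=0.2):
    """Residual bottleneck adapter over frozen features (CLIP-Adapter style).

    Only W1, b1, W2, b2 would be trained on the target domain;
    the backbone producing `features` stays frozen.
    """
    h = np.maximum(features @ W1 + b1, 0.0)  # down-projection + ReLU
    adapted = h @ W2 + b2                    # up-projection back to feature dim
    # Residual blend: alpha=0 keeps the frozen features unchanged.
    return alpha * adapted + (1.0 - alpha) * features

# Illustrative shapes: 8-dim features, rank-2 bottleneck (hypothetical values).
rng = np.random.default_rng(0)
d, r = 8, 2
x = rng.standard_normal(d)          # stands in for a frozen encoder's output
W1 = rng.standard_normal((d, r)) * 0.1
b1 = np.zeros(r)
W2 = rng.standard_normal((r, d)) * 0.1
b2 = np.zeros(d)
y = adapter(x, W1, b1, W2, b2)
```

Because only the tiny bottleneck (here 2 of 8 dimensions) is trained, this design adapts a large pre-trained model to a new domain from few labeled examples without risking catastrophic forgetting in the backbone.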
Papers
October 29, 2024
July 26, 2024
March 11, 2024
March 10, 2024
January 6, 2024
October 2, 2023
September 3, 2023
August 22, 2023
June 6, 2023
June 5, 2023
May 25, 2023
January 16, 2023
December 15, 2022
April 20, 2022
January 27, 2022
December 30, 2021
December 27, 2021
December 4, 2021
November 11, 2021