Unsupervised Fine-Tuning
Unsupervised fine-tuning aims to improve pre-trained models, such as large language models (LLMs) and vision-language models (VLMs) like CLIP, using unlabeled data, thereby reducing reliance on expensive and time-consuming annotation. Current research focuses on strategies for adapting these models to new tasks without labeled examples, drawing on techniques such as self-supervised learning, prompt engineering, and retrieval-augmented generation (RAG). This area is significant because it promises to make powerful pre-trained models more scalable and applicable across diverse domains, particularly where labeled data are scarce.
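As a concrete illustration of one such strategy, the sketch below adapts a CLIP-style zero-shot classifier on unlabeled images by minimizing the entropy of its predictions, updating only lightweight class/prompt embeddings while the image encoder stays frozen. The tiny stand-in encoders, parameter names, and hyperparameters are illustrative assumptions, not the method of any specific paper listed here; in practice the stand-ins would be replaced by a real pre-trained CLIP checkpoint and a real unlabeled data loader.

```python
# Minimal sketch (assumed setup): unsupervised adaptation of a CLIP-style model
# via entropy minimization on unlabeled images. The tiny encoders below are
# hypothetical stand-ins for pre-trained vision/text towers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyImageEncoder(nn.Module):
    """Stand-in for a frozen, pre-trained vision tower."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class TinyTextEncoder(nn.Module):
    """Stand-in for the text tower; only these class embeddings are tuned."""
    def __init__(self, n_classes=10, dim=64):
        super().__init__()
        self.class_embeddings = nn.Parameter(torch.randn(n_classes, dim))
    def forward(self):
        return F.normalize(self.class_embeddings, dim=-1)

image_encoder = TinyImageEncoder()
text_encoder = TinyTextEncoder()

# Freeze the vision tower; no labels are needed because the loss is the
# entropy of the model's own predictions on unlabeled data.
for p in image_encoder.parameters():
    p.requires_grad_(False)
optimizer = torch.optim.Adam(text_encoder.parameters(), lr=1e-3)

unlabeled_images = torch.randn(16, 3, 32, 32)  # placeholder for a real unlabeled batch

for step in range(10):
    img_feats = image_encoder(unlabeled_images)   # (B, dim), unit-normalized
    txt_feats = text_encoder()                    # (C, dim), unit-normalized
    logits = 100.0 * img_feats @ txt_feats.t()    # scaled cosine similarities
    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    print(f"step {step}: mean prediction entropy = {entropy.item():.4f}")
```

Entropy minimization is only one of several unsupervised objectives in this area; pseudo-labeling and self-supervised auxiliary losses follow the same pattern of optimizing a label-free loss over a small set of tunable parameters.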
Papers
Paper entries dated from June 7, 2022 through October 21, 2024.