Pre-trained Vision-Language Models
Pre-trained vision-language models (VLMs) integrate visual and textual information to improve multimodal understanding and enable zero-shot or few-shot learning across diverse tasks. Current research focuses on strengthening VLMs' compositional reasoning, adapting them to specialized domains (e.g., agriculture, healthcare), and improving efficiency through quantization and parameter-efficient fine-tuning techniques such as prompt learning and adapter modules. These advances enable more robust and efficient applications of VLMs, from robotics and medical image analysis to open-vocabulary object detection and long-tailed image classification.
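To make the zero-shot use case concrete, the sketch below classifies an image against free-form text labels with a pre-trained CLIP model. It assumes the Hugging Face transformers library and the publicly available openai/clip-vit-base-patch32 checkpoint; the image file name and label set are illustrative placeholders, not part of any specific paper listed here.

```python
# Minimal sketch: zero-shot image classification with a pre-trained VLM (CLIP).
# Assumes `transformers` and the "openai/clip-vit-base-patch32" checkpoint;
# "example.jpg" and the label prompts below are hypothetical placeholders.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder input image
labels = ["a photo of a cat", "a photo of a dog", "a photo of a tractor"]

# Encode the image and the candidate text prompts jointly.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Image-text similarity scores, normalized into per-label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

Because the labels are ordinary text prompts, the same model handles new categories without retraining, which is the property that prompt learning and adapter modules then refine for specialized domains.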
Papers
Twenty papers are collected under this topic, dated from June 22, 2022 to March 2, 2023.