Single CLIP
CLIP, a powerful vision-language model, is the subject of extensive research aimed at improving its performance and addressing its limitations across applications. Current work focuses on mitigating object hallucinations, adapting the model to specialized domains (e.g., agriculture), and building robust defenses against adversarial attacks and biases. This research matters because it leverages CLIP's strong zero-shot capabilities while improving the model's accuracy, reliability, and fairness on diverse downstream tasks, from image generation to anomaly detection.
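The zero-shot capability mentioned above rests on a simple mechanism: CLIP embeds an image and a set of class-describing prompts into a shared space, then scores classes by cosine similarity. The sketch below illustrates that scoring step with placeholder embeddings standing in for CLIP's actual image and text encoders; the function and prompt names are illustrative, not part of any CLIP API.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, temperature=0.01):
    """CLIP-style zero-shot classification: cosine similarity between
    one image embedding and one text embedding per class, scaled by a
    temperature and softmaxed into class probabilities."""
    # L2-normalize so dot products become cosine similarities
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = txt @ img / temperature
    exp = np.exp(logits - logits.max())  # stable softmax
    return exp / exp.sum()

# Placeholder embeddings stand in for CLIP's encoders; in practice
# these come from encoding an image and prompts like the ones below.
rng = np.random.default_rng(0)
dim = 512
class_prompts = ["a photo of a cat", "a photo of a dog"]
text_embs = rng.normal(size=(len(class_prompts), dim))
image_emb = text_embs[0] + 0.1 * rng.normal(size=dim)  # close to "cat"
probs = zero_shot_classify(image_emb, text_embs)
print(class_prompts[int(np.argmax(probs))])
```

Because no retraining is involved, swapping in a new set of prompts immediately yields a new classifier, which is what makes the zero-shot setting so flexible.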