Single CLIP
Single CLIP, a powerful vision-language model, is under active study aimed at improving its performance and addressing its limitations across applications. Current research focuses on mitigating issues such as object hallucinations, adapting the model to specialized domains (e.g., agriculture), and developing robust defenses against adversarial attacks and biases. This work matters because it leverages CLIP's strong zero-shot capabilities while improving its accuracy, reliability, and fairness on diverse downstream tasks, with impact on fields ranging from image generation to anomaly detection.
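For readers unfamiliar with how CLIP's zero-shot classification works, the following is a minimal NumPy sketch of the scoring step only: an image embedding is compared against one text embedding per candidate label via cosine similarity, scaled, and softmaxed. The toy embeddings and function names here are illustrative stand-ins; in practice the embeddings come from CLIP's image and text encoders, and the scale factor is the model's learned logit scale.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere, as CLIP does before scoring."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def zero_shot_probs(image_emb, text_embs, logit_scale=100.0):
    """CLIP-style zero-shot scoring (sketch with toy inputs).

    image_emb: (d,) embedding of one image.
    text_embs: (k, d) embeddings of k label prompts (e.g. "a photo of a cat").
    Returns a (k,) probability distribution over the labels.
    """
    img = l2_normalize(image_emb)
    txt = l2_normalize(text_embs)
    logits = logit_scale * (txt @ img)      # scaled cosine similarities
    e = np.exp(logits - logits.max())       # numerically stable softmax
    return e / e.sum()

# Toy example: the image embedding is closest to the first label prompt.
probs = zero_shot_probs(np.array([0.9, 0.1, 0.0]), np.eye(3))
```

The key property is that no label-specific training is needed: adding a new class only requires embedding a new text prompt, which is what makes zero-shot transfer (and its failure modes, such as hallucinated objects) a central research topic.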