Captioning Evaluation
Evaluating the quality of automatically generated image captions is crucial but challenging: the goal is metrics that accurately reflect human judgments of fluency, detail, and factual accuracy. Current research develops both reference-free metrics, which often leverage CLIP-based models and contrastive learning, and reference-based metrics that address the limitations of established measures such as CIDEr and METEOR, particularly in handling detailed descriptions and visual hallucinations. These advances matter because better evaluation methods are essential for driving progress in image captioning models and their applications in areas such as accessibility and visual information retrieval.
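As a concrete illustration of the reference-free family, a CLIPScore-style metric scores a caption by the cosine similarity between CLIP's image and text embeddings, rescaled and clipped at zero. The sketch below is a minimal implementation of that idea, assuming the Hugging Face transformers CLIP wrappers; the model name and the 2.5 rescaling weight follow the published CLIPScore recipe (Hessel et al., 2021), and the image path in the usage comment is a placeholder, not code from any of the papers listed on this page.

```python
# Minimal CLIPScore-style reference-free caption metric (sketch).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

@torch.no_grad()
def clip_score(image: Image.Image, caption: str, w: float = 2.5) -> float:
    """Score a caption against an image without any reference captions:
    rescaled cosine similarity between CLIP image and text embeddings."""
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True, truncation=True)
    img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    # L2-normalise so the dot product is a cosine similarity in [-1, 1].
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    cos = (img_emb * txt_emb).sum(dim=-1).item()
    # Clamp at 0 and rescale, as in the CLIPScore definition.
    return w * max(cos, 0.0)

# Example usage (placeholder path):
# score = clip_score(Image.open("photo.jpg"), "a dog catching a frisbee in a park")
```

Because the score depends only on the image and the candidate caption, it sidesteps the reference-coverage problem that reference-based metrics like CIDEr and METEOR face, though it inherits CLIP's biases and limited sensitivity to fine-grained detail.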