Image Captioning
Image captioning aims to automatically generate descriptive text for images, bridging computer vision and natural language processing. Current research focuses on improving efficiency (e.g., through early exits and knowledge distillation), enhancing performance on fine-grained datasets (e.g., by incorporating object-part details), and developing more robust evaluation metrics (e.g., ones that detect and penalize hallucinations). These advances matter for applications ranging from assistance for visually impaired users to image search and retrieval, and are driving innovation in both vision-language models and evaluation methodologies.
Papers
With a Little Help from your own Past: Prototypical Memory Networks for Image Captioning
Manuele Barraco, Sara Sarto, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara
CgT-GAN: CLIP-guided Text GAN for Image Captioning
Jiarui Yu, Haoran Li, Yanbin Hao, Bin Zhu, Tong Xu, Xiangnan He
Audio Difference Captioning Utilizing Similarity-Discrepancy Disentanglement
Daiki Takeuchi, Yasunori Ohishi, Daisuke Niizumi, Noboru Harada, Kunio Kashino