Image Captioning
Image captioning aims to automatically generate descriptive text for images, bridging computer vision and natural language processing. Current research focuses on improving efficiency (e.g., through early exits and knowledge distillation), enhancing performance on fine-grained datasets (e.g., by incorporating object-part details), and developing more robust evaluation metrics (e.g., detecting and penalizing hallucinated content). These advances matter for applications ranging from assistive technology for visually impaired users to image search and retrieval, and they are driving innovation in both vision-language models and evaluation methodology.
Papers
LocCa: Visual Pretraining with Location-aware Captioners
Bo Wan, Michael Tschannen, Yongqin Xian, Filip Pavetic, Ibrahim Alabdulmohsin, Xiao Wang, André Susano Pinto, Andreas Steiner, Lucas Beyer, Xiaohua Zhai
Text Data-Centric Image Captioning with Interactive Prompts
Yiyu Wang, Hao Luo, Jungang Xu, Yingfei Sun, Fan Wang
SciCapenter: Supporting Caption Composition for Scientific Figures with Machine-Generated Captions and Ratings
Ting-Yao Hsu, Chieh-Yang Huang, Shih-Hong Huang, Ryan Rossi, Sungchul Kim, Tong Yu, C. Lee Giles, Ting-Hao K. Huang
Semi-Supervised Image Captioning Considering Wasserstein Graph Matching
Yang Yang