Vision-Language Tasks
Vision-language tasks aim to bridge the gap between visual and textual information, enabling machines to describe images, answer questions about them, and perform complex reasoning over combined image and text data. Current research focuses on improving model efficiency and robustness, particularly through innovative pre-training strategies, parameter-efficient fine-tuning methods, and more interpretable architectures such as transformers and multimodal large language models (MLLMs). These advances are significant for assistive technologies, for improving the accessibility and usability of AI systems across domains, and for furthering our understanding of multimodal learning.
Papers
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
Reformulating Vision-Language Foundation Models and Datasets Towards Universal Multimodal Assistants
Tianyu Yu, Jinyi Hu, Yuan Yao, Haoye Zhang, Yue Zhao, Chongyi Wang, Shan Wang, Yinxv Pan, Jiao Xue, Dahai Li, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun