Large Multimodal Model
Large multimodal models (LMMs) integrate vision and language processing to understand and generate information across multiple modalities. Current research focuses on improving LMM performance on complex tasks such as temporal reasoning in videos, fine-grained image understanding, and robust handling of diverse data types, often using architectures built on instruction tuning and contrastive learning. These advances enable more sophisticated analysis of, and interaction with, the world, with applications ranging from intelligent tutoring systems and robotics to more accurate medical diagnosis.
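The contrastive-learning objective mentioned above, popularized by CLIP-style vision-language pretraining, pulls matched image and text embeddings together while pushing mismatched pairs apart. A minimal sketch of the symmetric InfoNCE loss follows; the batch size, embedding dimension, and temperature value are illustrative assumptions, not taken from any paper listed here.

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched image-text pairs.

    img_emb, txt_emb: (batch, dim) arrays; row i of each is a matched pair.
    temperature: illustrative value; real models often learn it.
    """
    # L2-normalize so dot products become cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature        # (batch, batch) similarity matrix
    labels = np.arange(len(logits))           # matched pair i sits on the diagonal

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
# Perfectly aligned embeddings should score a much lower loss than random pairs.
loss_matched = contrastive_loss(emb, emb)
loss_random = contrastive_loss(emb, rng.normal(size=(4, 8)))
```

In practice the two encoders are deep networks trained jointly, and the loss is computed with framework tensors rather than NumPy, but the diagonal-vs-off-diagonal structure of the similarity matrix is the core idea.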
Papers
Explaining latent representations of generative models with large multimodal models
Mengdan Zhu, Zhenke Liu, Bo Pan, Abhinav Angirekula, Liang Zhao
2AFC Prompting of Large Multimodal Models for Image Quality Assessment
Hanwei Zhu, Xiangjie Sui, Baoliang Chen, Xuelin Liu, Peilin Chen, Yuming Fang, Shiqi Wang
PathMMU: A Massive Multimodal Expert-Level Benchmark for Understanding and Reasoning in Pathology
Yuxuan Sun, Hao Wu, Chenglu Zhu, Sunyi Zheng, Qizi Chen, Kai Zhang, Yunlong Zhang, Dan Wan, Xiaoxiao Lan, Mengyue Zheng, Jingxiong Li, Xinheng Lyu, Tao Lin, Lin Yang
CognitiveOS: Large Multimodal Model based System to Endow Any Type of Robot with Generative AI
Artem Lykov, Mikhail Konenkov, Koffivi Fidèle Gbagbe, Mikhail Litvinov, Denis Davletshin, Aleksey Fedoseev, Miguel Altamirano Cabrera, Robinroy Peter, Dzmitry Tsetserukou
CMMMU: A Chinese Massive Multi-discipline Multimodal Understanding Benchmark
Ge Zhang, Xinrun Du, Bei Chen, Yiming Liang, Tongxu Luo, Tianyu Zheng, Kang Zhu, Yuyang Cheng, Chunpu Xu, Shuyue Guo, Haoran Zhang, Xingwei Qu, Junjie Wang, Ruibin Yuan, Yizhi Li, Zekun Wang, Yudong Liu, Yu-Hsuan Tsai, Fengji Zhang, Chenghua Lin, Wenhao Huang, Jie Fu
Benchmarking Large Multimodal Models against Common Corruptions
Jiawei Zhang, Tianyu Pang, Chao Du, Yi Ren, Bo Li, Min Lin