Paper ID: 2310.02528
On the Cognition of Visual Question Answering Models and Human Intelligence: A Comparative Study
Liben Chen, Long Chen, Tian Ellison-Chen, Zhuoyuan Xu
Visual Question Answering (VQA) is a challenging task that requires cross-modal understanding and reasoning over visual images and natural language questions. To inspect the association between VQA models and human cognition, we designed a survey to record the human thinking process and analyzed VQA models by comparing their outputs and attention maps with those of humans. We found that although VQA models resemble human cognition in architecture and perform similarly to humans at the recognition level, they still struggle with cognitive inferences. The analysis of the human thinking procedure serves to direct future research and to introduce more cognitive capacity into the modeling of features and architectures.
Submitted: Oct 4, 2023