Paper ID: 2112.13706

Multi-Image Visual Question Answering

Harsh Raj, Janhavi Dadhania, Akhilesh Bhardwaj, Prabuchandran KJ

While a lot of work has been done on developing models to tackle the problem of Visual Question Answering, the ability of these models to relate the question to the image features still remains underexplored. We present an empirical study of different feature extraction methods with different loss functions. We propose a new dataset for the task of Visual Question Answering with multiple image inputs having only one ground truth answer, and benchmark our results on it. Our final model, which utilises ResNet + RCNN image features and BERT embeddings and is inspired by the stacked attention network, achieves 39% word accuracy and 99% image accuracy on the CLEVER+TinyImagenet dataset.
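For intuition, the stacked attention mechanism the abstract refers to can be sketched as follows. This is a minimal NumPy illustration of one attention "hop" over image region features, repeated twice; the weight matrices, dimensions, and initialisation here are illustrative placeholders, not the authors' actual model or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_hop(V, q, Wv, Wq, wp):
    """One attention hop: score each image region against the query,
    then return the query refined by the attended image summary."""
    h = np.tanh(V @ Wv + q @ Wq)   # (m, k) joint image-question representation
    p = softmax(h @ wp)            # (m,) attention weights over regions
    v_att = p @ V                  # (d,) attention-weighted region summary
    return v_att + q, p            # refined query for the next hop

d, k, m = 16, 8, 10   # feature dim, hidden dim, number of regions (toy sizes)
V = rng.normal(size=(m, d))   # image region features (e.g. ResNet/RCNN outputs)
q = rng.normal(size=d)        # question embedding (e.g. pooled BERT output)

u = q
for _ in range(2):            # two stacked hops, as in stacked attention networks
    Wv = rng.normal(size=(d, k)) * 0.1
    Wq = rng.normal(size=(d, k)) * 0.1
    wp = rng.normal(size=k) * 0.1
    u, p = attention_hop(V, u, Wv, Wq, wp)

print(u.shape, p.shape)  # refined query (d,), last attention map (m,)
```

The refined vector `u` would then feed a classifier over answer words; with multiple input images, a similar attention could additionally score which image supports the answer.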

Submitted: Dec 27, 2021