Visual Question Answering
Visual Question Answering (VQA) aims to enable computers to answer natural-language questions about images, a task that requires tightly integrating visual and linguistic understanding. Current research emphasizes model robustness and reliability, addressing issues such as inconsistent responses, hallucinations, and the handling of unanswerable questions, often building on multimodal large language models (MLLMs) such as BLIP-2 and LLaVA. This field is crucial for advancing AI's ability to interact with the world in a more human-like way, with applications ranging from assistive technologies for visually impaired people to medical image analysis and automated evaluation of data visualizations.
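To make the task concrete, below is a minimal sketch of zero-shot VQA inference with BLIP-2 through the Hugging Face transformers library. The checkpoint name, "Question: ... Answer:" prompt template, and sample image URL follow the library's public BLIP-2 documentation; they are illustrative defaults, not the method of any particular paper discussed here.

```python
# Minimal zero-shot VQA sketch with BLIP-2 (Hugging Face transformers).
# Checkpoint, prompt template, and image URL follow the public BLIP-2 docs.
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Example image (two cats on a couch, a standard demo image from the docs).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# BLIP-2 answers open-ended questions when prompted with this template.
prompt = "Question: how many cats are in the picture? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    device, torch.float16
)

generated_ids = model.generate(**inputs, max_new_tokens=20)
answer = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(answer)  # e.g. "two"
```

The same pattern extends to reliability-oriented setups: for instance, an "unanswerable" option can be probed simply by asking whether the question can be answered from the image, though dedicated methods go well beyond prompting.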