Yes/No Questions
Research on question answering (QA) focuses on enabling computer systems to respond accurately and comprehensively to diverse question types, moving beyond simple keyword matching toward a nuanced understanding of context and intent. Current efforts concentrate on improving the robustness of large language models (LLMs) and retrieval-augmented generation (RAG) systems, particularly on challenges such as ambiguity, hallucination, and complex multi-hop reasoning across varied data sources (text, tables, knowledge graphs, and even audio). This work advances natural language processing and has substantial implications for applications ranging from improved search engines and chatbots to automated report generation in specialized domains such as healthcare and finance.
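To make the RAG framing above concrete, here is a minimal, self-contained sketch of a retrieval-augmented yes/no QA pipeline. Everything in it is an assumption for illustration: the tiny corpus, the lexical-overlap retriever standing in for a dense retriever, and the stubbed call_llm function standing in for a hosted model. It does not reproduce any system from the papers listed below.

```python
# Minimal sketch of a retrieval-augmented yes/no QA pipeline (illustrative only).
# The corpus, scoring function, and call_llm stub are placeholder assumptions.
from collections import Counter

CORPUS = [
    "The FAQ covers refund policies: refunds are issued within 14 days of purchase.",
    "Multi-hop questions require combining evidence from several passages.",
    "Knowledge graphs store facts as subject-relation-object triples.",
]

def score(question: str, passage: str) -> int:
    """Crude lexical-overlap score used here as a stand-in for a dense retriever."""
    q_tokens = Counter(question.lower().split())
    p_tokens = Counter(passage.lower().split())
    return sum((q_tokens & p_tokens).values())

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages with the highest overlap score."""
    return sorted(CORPUS, key=lambda p: score(question, p), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; a real system would query an actual model."""
    return "yes"  # stubbed response for illustration

def answer_yes_no(question: str) -> str:
    """Retrieve supporting passages, build a grounded prompt, and ask for yes/no."""
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer strictly 'yes' or 'no' using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer_yes_no("Are refunds issued within 14 days of purchase?"))
```

Constraining the answer to the retrieved context, as the prompt does here, is one common way to reduce hallucination; real systems replace the overlap scorer with a learned dense retriever and add verification steps for multi-hop questions.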
Papers
MFBE: Leveraging Multi-Field Information of FAQs for Efficient Dense Retrieval
Debopriyo Banerjee, Mausam Jain, Ashish Kulkarni
Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions?
Yang Chen, Hexiang Hu, Yi Luan, Haitian Sun, Soravit Changpinyo, Alan Ritter, Ming-Wei Chang