Yes-No Questions
Research on question answering (QA) focuses on enabling computer systems to respond accurately and comprehensively to diverse question types, moving beyond simple keyword matching toward a nuanced understanding of context and intent. Current efforts concentrate on improving the robustness of large language models (LLMs) and retrieval-augmented generation (RAG) systems, particularly on challenges such as ambiguity, hallucination, and complex multi-hop reasoning across varied data sources (text, tables, knowledge graphs, and even audio). This work advances natural language processing and has substantial implications for applications ranging from improved search engines and chatbots to automated report generation in specialized domains such as healthcare and finance.
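As a concrete illustration of the retrieval-augmented setup mentioned above, the minimal sketch below wires a toy lexical retriever to a placeholder yes/no decision step. The corpus, question, scoring heuristic, and decision rule are all hypothetical stand-ins introduced here for illustration; a real system would substitute a learned retriever and an LLM. The sketch only demonstrates the retrieve-then-answer-or-abstain control flow that helps curb hallucination.

```python
# Minimal sketch of a retrieval-augmented yes/no QA loop (standard library only).
# All data and heuristics below are hypothetical placeholders, not any paper's method.
from collections import Counter
import math

CORPUS = [  # hypothetical knowledge snippets
    "Retrieval-augmented generation grounds answers in retrieved passages.",
    "Hallucination occurs when a model asserts facts absent from its sources.",
    "Indirect answers to yes-no questions often require pragmatic inference.",
]

def tokenize(text: str) -> list[str]:
    return [t.strip(".,?'\"").lower() for t in text.split()]

def score(question: str, passage: str) -> float:
    """Crude lexical-overlap score standing in for a learned retriever."""
    q, p = Counter(tokenize(question)), Counter(tokenize(passage))
    overlap = sum((q & p).values())
    return overlap / math.sqrt(len(tokenize(passage)) + 1)

def answer_yes_no(question: str) -> tuple[str, str]:
    """Retrieve the best-matching passage, then emit a yes/no/unknown verdict."""
    best = max(CORPUS, key=lambda passage: score(question, passage))
    if score(question, best) == 0:
        return "unknown", best  # abstain rather than hallucinate
    # Placeholder decision rule; a real system would prompt an LLM with the evidence.
    verdict = "yes" if "not" not in tokenize(best) else "no"
    return verdict, best

if __name__ == "__main__":
    verdict, evidence = answer_yes_no("Do indirect answers require inference?")
    print(verdict, "|", evidence)
```

The abstention branch reflects a common design choice in RAG systems: when retrieval returns no supporting evidence, declining to answer is preferable to generating an ungrounded yes or no.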
Papers
Three Questions Concerning the Use of Large Language Models to Facilitate Mathematics Learning
An-Zi Yen, Wei-Ling Hsu
Semantic Decomposition of Question and SQL for Text-to-SQL Parsing
Ben Eyal, Amir Bachar, Ophir Haroche, Moran Mahabi, Michael Elhadad
Interpreting Indirect Answers to Yes-No Questions in Multiple Languages
Zijie Wang, Md Mosharaf Hossain, Shivam Mathur, Terry Cruz Melo, Kadir Bulut Ozler, Keun Hee Park, Jacob Quintero, MohammadHossein Rezaei, Shreya Nupur Shakya, Md Nayem Uddin, Eduardo Blanco
Can NLP Models 'Identify', 'Distinguish', and 'Justify' Questions that Don't have a Definitive Answer?
Ayushi Agarwal, Nisarg Patel, Neeraj Varshney, Mihir Parmar, Pavan Mallina, Aryan Bhavin Shah, Srihari Raju Sangaraju, Tirth Patel, Nihar Thakkar, Chitta Baral
Originality and the Future of Copyright in an Age of Generative AI
Paulius Jurcys, Mark Fenwick