Reasoning Questions
Reasoning questions in question-answering (QA) tasks challenge large language models (LLMs) to synthesize information and perform complex logical operations, going beyond simple factual recall. Current research focuses on developing benchmarks with diverse and challenging reasoning questions, exploring prompting techniques such as Chain-of-Thought (CoT) to improve LLM performance, and designing models that incorporate external knowledge sources or enhance interpretability. These advances are crucial for improving LLMs' ability to handle complex real-world problems and for building more trustworthy, explainable AI systems across applications such as personalized recommendation and evidence-based medicine.
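To make the Chain-of-Thought technique mentioned above concrete, the sketch below contrasts a direct prompt with a CoT prompt that includes one worked in-context example plus a "step by step" cue. It is a minimal illustration, not any paper's implementation; `call_llm` is a hypothetical placeholder to be wired to whichever LLM API you use.

```python
# Minimal sketch of Chain-of-Thought (CoT) prompting for a reasoning question.
# `call_llm` is a hypothetical stand-in for any LLM completion API (assumption).

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's completion API."""
    raise NotImplementedError("Wire this to an actual LLM endpoint.")

QUESTION = (
    "A cafeteria had 23 apples. They used 20 for lunch and bought 6 more. "
    "How many apples do they have?"
)

# Direct prompting: the model must jump straight to the final answer.
direct_prompt = f"Q: {QUESTION}\nA:"

# CoT prompting: one exemplar demonstrates step-by-step reasoning, and the
# trailing cue encourages the model to emit intermediate steps before answering.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
    f"Q: {QUESTION}\n"
    "A: Let's think step by step."
)

if __name__ == "__main__":
    # Inspect the prompts; send them via call_llm() once it is wired up.
    print(direct_prompt)
    print("---")
    print(cot_prompt)
```

Empirically, eliciting the intermediate steps (here, 23 - 20 = 3, then 3 + 6 = 9) is what distinguishes CoT from direct prompting: the model's reasoning trace becomes both a performance aid and an interpretable artifact.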
Papers
Dynamic Clue Bottlenecks: Towards Interpretable-by-Design Visual Question Answering
Xingyu Fu, Ben Zhou, Sihao Chen, Mark Yatskar, Dan Roth
TACR: A Table-alignment-based Cell-selection and Reasoning Model for Hybrid Question-Answering
Jian Wu, Yicheng Xu, Yan Gao, Jian-Guang Lou, Börje F. Karlsson, Manabu Okumura