Paper ID: 2305.15014
Unlocking Temporal Question Answering for Large Language Models with Tailor-Made Reasoning Logic
Xingxuan Li, Liying Cheng, Qingyu Tan, Hwee Tou Ng, Shafiq Joty, Lidong Bing
Time is a significant dimension of our reality, and we observe that large language models (LLMs) face notable challenges when engaging in temporal reasoning. Our preliminary experiments show that methods that generate intermediate reasoning steps, such as chain-of-thought prompting and program-aided language models, do not consistently improve performance on complex temporal question-answering tasks. We attribute this limitation to the LLMs' inadequate understanding of temporal information. To address this problem, we propose TempLogic, a novel framework designed specifically for temporal question-answering tasks across three levels of reasoning. TempLogic incorporates retrieval-guided context distillation, temporal data extraction, and tailor-made logic reasoning. Extensive experiments and analysis demonstrate the effectiveness of our framework in solving intricate time-bound reasoning tasks.
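To make the pipeline concrete, the sketch below illustrates the general idea behind the last two stages (temporal data extraction followed by logic reasoning): once time-stamped facts are extracted into structured tuples, a temporal question can be answered by explicit interval arithmetic rather than free-form generation. This is a minimal hypothetical illustration, not the paper's implementation; the fact schema, names, and dates are invented for the example.

```python
from datetime import date

def overlaps(s1, e1, s2, e2):
    """True if the closed intervals [s1, e1] and [s2, e2] share at least one day."""
    return s1 <= e2 and s2 <= e1

# Hypothetical output of a temporal data extraction step:
# (subject, relation, object, start, end) tuples.
facts = [
    ("Alice", "worked_at", "AcmeCorp", date(2010, 1, 1), date(2015, 6, 30)),
    ("Alice", "worked_at", "Globex", date(2015, 7, 1), date(2020, 12, 31)),
    ("Alice", "lived_in", "Paris", date(2012, 3, 1), date(2017, 8, 31)),
]

def query_during(facts, relation, start, end):
    """Logic-reasoning step: return objects whose validity interval
    overlaps the query interval."""
    return [obj for _, rel, obj, s, e in facts
            if rel == relation and overlaps(s, e, start, end)]

# "Where did Alice work while she lived in Paris?"
paris = next(f for f in facts if f[2] == "Paris")
print(query_during(facts, "worked_at", paris[3], paris[4]))
# → ['AcmeCorp', 'Globex']
```

Answering via deterministic interval checks over extracted tuples sidesteps the arithmetic and date-comparison errors that free-form LLM generation tends to make on such questions.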
Submitted: May 24, 2023