Temporal Reasoning
Temporal reasoning, the ability of machines to understand and process information about time and events, is a crucial area of artificial intelligence research focused on improving the accuracy and robustness of models that handle temporal relationships. Current work emphasizes enhancing large language models (LLMs) and other architectures through techniques such as graph-based representations, contrastive learning, and the integration of temporal logic, aiming to overcome limitations in complex temporal scenarios such as multi-hop reasoning and co-temporal events. These advances are significant for applications including question answering, video understanding, and knowledge graph reasoning, ultimately leading to more capable and reliable AI systems that can interact with dynamic real-world data.
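To make the notion of co-temporal events concrete, the sketch below checks whether two events overlap in time using simple interval comparison. This is an illustrative toy example, not a method from any of the listed papers; the `Event` class and the example tenures are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """A named event spanning a closed time interval (e.g., years)."""
    name: str
    start: int
    end: int

def co_temporal(a: Event, b: Event) -> bool:
    """Two events are co-temporal if their time intervals overlap."""
    return a.start <= b.end and b.start <= a.end

# Hypothetical events for illustration
role_a = Event("Role A", 2001, 2008)
role_b = Event("Role B", 2005, 2012)
role_c = Event("Role C", 2010, 2015)

print(co_temporal(role_a, role_b))  # True: 2005-2008 overlap
print(co_temporal(role_a, role_c))  # False: intervals are disjoint
```

Benchmarks such as those in the papers below probe whether LLMs can perform this kind of interval reasoning implicitly from text, where the intervals are stated in natural language rather than given as structured data.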
Papers
Narrative-of-Thought: Improving Temporal Reasoning of Large Language Models via Recounted Narratives
Xinliang Frederick Zhang, Nick Beauchamp, Lu Wang
Temporal Relational Reasoning of Large Language Models for Detecting Stock Portfolio Crashes
Kelvin J.L. Koa, Yunshan Ma, Ritchie Ng, Huanhuan Zheng, Tat-Seng Chua
Test of Time: A Benchmark for Evaluating LLMs on Temporal Reasoning
Bahare Fatemi, Mehran Kazemi, Anton Tsitsulin, Karishma Malkan, Jinyeong Yim, John Palowitch, Sungyong Seo, Jonathan Halcrow, Bryan Perozzi
Living in the Moment: Can Large Language Models Grasp Co-Temporal Reasoning?
Zhaochen Su, Juntao Li, Jun Zhang, Tong Zhu, Xiaoye Qu, Pan Zhou, Yan Bowen, Yu Cheng, Min Zhang