Paper ID: 2412.13377

DateLogicQA: Benchmarking Temporal Biases in Large Language Models

Gagan Bhatia, MingZe Tang, Cristina Mahanta, Madiha Kazi

This paper introduces DateLogicQA, a benchmark of 190 questions covering diverse date formats, temporal contexts, and reasoning types. We propose the Semantic Integrity Metric to assess tokenization quality and analyse two types of bias: Representation-Level Bias, which affects embeddings, and Logical-Level Bias, which influences reasoning outputs. Our findings provide a comprehensive evaluation of LLMs' capabilities and limitations in temporal reasoning, highlighting key challenges in handling temporal data accurately. The GitHub repository for our work is available at this https URL
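The abstract's concern with tokenization quality can be made concrete with a toy check: does each semantic unit of a date (day, month, year) survive as a single token, or does the tokenizer fragment it? The sketch below is an illustration of that general idea only, not the paper's Semantic Integrity Metric; the function name and the example tokenizations are hypothetical.

```python
import re

def component_integrity(date_str: str, tokens: list[str]) -> float:
    """Toy integrity score (NOT the paper's metric): the fraction of
    date components (split on '-', '/', spaces, commas, periods) that
    appear as a single whole token in the given tokenization."""
    components = [c for c in re.split(r"[-/ ,.]+", date_str) if c]
    intact = sum(1 for c in components if c in tokens)
    return intact / len(components) if components else 1.0

# Hypothetical tokenizations of the same date in two formats:
iso_tokens = ["2024", "-", "12", "-", "17"]             # each field survives
us_tokens = ["Dec", "ember", " 17", ",", " 20", "24"]   # fields fragmented

print(component_integrity("2024-12-17", iso_tokens))        # 1.0
print(component_integrity("December 17, 2024", us_tokens))  # 0.0
```

Under this toy score, the ISO-formatted date keeps every component intact while the fragmented US-style tokenization scores zero, which is the kind of format-sensitive discrepancy a representation-level analysis would probe.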

Submitted: Dec 17, 2024