Paper ID: 2410.05440
Can LLMs Understand Time Series Anomalies?
Zihao Zhou, Rose Yu
Large Language Models (LLMs) have gained popularity in time series forecasting, but their potential for anomaly detection remains largely unexplored. Our study investigates whether LLMs can understand and detect anomalies in time series data, focusing on zero-shot and few-shot scenarios. Inspired by conjectures about LLMs' behavior from time series forecasting research, we formulate key hypotheses about LLMs' capabilities in time series anomaly detection. We design and conduct principled experiments to test each of these hypotheses. Our investigation reveals several surprising findings about LLMs for time series:
1. LLMs understand time series better as *images* than as text.
2. LLMs do not demonstrate enhanced performance when prompted to engage in *explicit reasoning* about time series analysis.
3. Contrary to common belief, LLMs' understanding of time series *does not* stem from their repetition biases or arithmetic abilities.
4. LLMs' behaviors and performance in time series analysis *vary significantly* across different model architectures.
This study provides the first comprehensive analysis of contemporary LLM capabilities in time series anomaly detection. Our results suggest that while LLMs can understand time series anomalies, many common conjectures based on their reasoning capabilities do not hold. These insights pave the way for more effective LLM-based approaches in time series analysis, bridging the gap between forecasting and anomaly detection applications.
Submitted: Oct 7, 2024