Paper ID: 2410.05440 • Published Oct 7, 2024
Can LLMs Understand Time Series Anomalies?
Zihao Zhou, Rose Yu
Large Language Models (LLMs) have gained popularity in time series
forecasting, but their potential for anomaly detection remains largely
unexplored. Our study investigates whether LLMs can understand and detect
anomalies in time series data, focusing on zero-shot and few-shot scenarios.
Inspired by conjectures about LLMs' behavior from time series forecasting
research, we formulate key hypotheses about LLMs' capabilities in time series
anomaly detection. We design and conduct principled experiments to test each of
these hypotheses. Our investigation reveals several surprising findings about
LLMs for time series: (1) LLMs understand time series better as images than
as text; (2) LLMs do not show improved performance when prompted to engage in
explicit reasoning about time series analysis; (3) contrary to common belief,
LLMs' understanding of time series does not stem from their repetition biases
or arithmetic abilities; and (4) LLMs' behavior and performance in time
series analysis vary significantly across models. This study provides
the first comprehensive analysis of contemporary LLM capabilities in time
series anomaly detection. Our results suggest that while LLMs can understand
trivial time series anomalies, we have no evidence that they can understand
more subtle real-world anomalies, and many common conjectures about their
reasoning capabilities do not hold. All synthetic dataset generators, final
prompts, and evaluation scripts are available at
this https URL
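To make the zero-shot setup concrete, here is a minimal sketch of the kind of pipeline the abstract describes: generating a synthetic series with one injected point anomaly and serializing it as text for a zero-shot detection prompt. The function names, parameter values, and prompt wording are illustrative assumptions, not the authors' actual generators or prompts.

```python
import math
import random


def make_series(n=200, anomaly_idx=120, spike=5.0, seed=0):
    """Synthetic sine series with one injected point anomaly.

    Illustrative only: parameters and noise level are assumptions,
    not the paper's actual dataset-generator settings.
    """
    rng = random.Random(seed)
    series = [math.sin(2 * math.pi * t / 50) + rng.gauss(0, 0.1)
              for t in range(n)]
    series[anomaly_idx] += spike  # a trivial, easily visible anomaly
    return series


def to_text_prompt(series, decimals=2):
    """Render the series as plain text for a zero-shot LLM prompt.

    The paper compares this text representation against an image
    rendering of the same series; the prompt wording here is a
    hypothetical placeholder.
    """
    values = ", ".join(f"{x:.{decimals}f}" for x in series)
    return ("The following is a time series. "
            "List the indices of any anomalous points.\n"
            f"Series: {values}")


series = make_series()
prompt = to_text_prompt(series)
```

The paper's finding (1) suggests that rendering `series` as a plot image for a vision-capable model would outperform sending `prompt` as text, though the exact rendering choices are model-dependent.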