Paper ID: 2204.00498

Evaluating the Text-to-SQL Capabilities of Large Language Models

Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau

We perform an empirical evaluation of the Text-to-SQL capabilities of the Codex language model. We find that, without any finetuning, Codex is a strong baseline on the Spider benchmark; we also analyze the failure modes of Codex in this setting. Furthermore, we demonstrate on the GeoQuery and Scholar benchmarks that a small number of in-domain examples provided in the prompt enables Codex to perform better than state-of-the-art models finetuned on such few-shot examples.
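To make the prompt-based setup concrete, here is a minimal sketch of few-shot Text-to-SQL prompting with a Codex-style completion model. The schema text, example (question, SQL) pairs, model name, and prompt layout are illustrative assumptions, not the authors' exact format; the paper itself compares several prompt designs and benchmarks.

```python
# Minimal sketch of few-shot prompt-based Text-to-SQL with a Codex-style
# completion model. Schema, examples, and model name are illustrative
# assumptions, not the exact prompt format used in the paper.
import openai

SCHEMA = """CREATE TABLE author (authorid INT, authorname TEXT);
CREATE TABLE paper (paperid INT, title TEXT, year INT);
CREATE TABLE writes (paperid INT, authorid INT);"""

# A small number of in-domain (question, SQL) pairs placed in the prompt.
FEW_SHOT_EXAMPLES = [
    ("How many papers were published in 2016?",
     "SELECT COUNT(*) FROM paper WHERE year = 2016;"),
    ("List the titles of papers written by 'Jane Doe'.",
     "SELECT p.title FROM paper p JOIN writes w ON p.paperid = w.paperid "
     "JOIN author a ON w.authorid = a.authorid "
     "WHERE a.authorname = 'Jane Doe';"),
]

def build_prompt(question: str) -> str:
    """Concatenate schema, few-shot examples, and the new question."""
    parts = ["-- SQLite database schema", SCHEMA, ""]
    for q, sql in FEW_SHOT_EXAMPLES:
        parts += [f"-- Question: {q}", sql, ""]
    parts += [f"-- Question: {question}", "SELECT"]
    return "\n".join(parts)

# Hypothetical call to the (legacy) OpenAI completion endpoint; greedy
# decoding, stopping at the end of the generated query.
response = openai.Completion.create(
    model="code-davinci-002",  # assumed Codex model name
    prompt=build_prompt("Which authors have written more than 5 papers?"),
    temperature=0.0,
    max_tokens=150,
    stop=[";"],
)
predicted_sql = "SELECT" + response["choices"][0]["text"] + ";"
print(predicted_sql)
```

The key design choice this sketch illustrates is that adaptation happens entirely in the prompt: the database schema and a handful of in-domain examples are prepended as text, and no model parameters are updated.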

Submitted: Mar 15, 2022