Paper ID: 2310.16654
ChatGPT is a Potential Zero-Shot Dependency Parser
Boda Lin, Xinyi Zhou, Binghao Tang, Xiaocheng Gong, Si Li
Pre-trained language models have been widely used in the dependency parsing task and have achieved significant improvements in parser performance. However, it remains understudied whether pre-trained language models can spontaneously exhibit dependency parsing ability without an additional parser structure in the zero-shot scenario. In this paper, we explore the dependency parsing ability of large language models such as ChatGPT and conduct linguistic analysis. The experimental results demonstrate that ChatGPT is a potential zero-shot dependency parser, and the linguistic analysis also reveals some unique preferences in its parsing outputs.
Submitted: Oct 25, 2023