Paper ID: 2306.01009
Examining the Emergence of Deductive Reasoning in Generative Language Models
Peter Belcak, Luca A. Lanzendörfer, Roger Wattenhofer
We conduct a preliminary inquiry into the ability of generative transformer models to reason deductively from provided premises. We observe notable differences in performance between models from different training setups and find that deductive reasoning ability increases with scale. Further, we discover that performance generally does not decrease with the length of the deductive chain needed to reach the conclusion, with the exception of OpenAI GPT-3 and GPT-3.5 models. Our study considers a wide variety of transformer-decoder models, ranging from 117 million to 175 billion parameters in size.
Submitted: May 31, 2023