Paper ID: 2410.18225
Generalizations across filler-gap dependencies in neural language models
Katherine Howitt, Sathvik Nair, Allison Dods, Robert Melvin Hopkins
Humans develop their grammars by making structural generalizations from finite input. We ask how filler-gap dependencies, which share a structural generalization despite diverse surface forms, might arise from the input. We explicitly control the input to a neural language model (NLM) to uncover whether the model posits a shared representation for filler-gap dependencies. We show that while NLMs succeed in differentiating grammatical from ungrammatical filler-gap dependencies, they rely on superficial properties of the input rather than on a shared generalization. Our work highlights the need for specific linguistic inductive biases to model language acquisition.
Submitted: Oct 23, 2024