Paper ID: 2209.11628
A Neural Model for Regular Grammar Induction
Peter Belcák, David Hofer, Roger Wattenhofer
Grammatical inference is a classical problem in computational learning theory and a topic of wider influence in natural language processing. We treat grammars as a model of computation and propose a novel neural approach to the induction of regular grammars from positive and negative examples. Our model is fully explainable, its intermediate results are directly interpretable as partial parses, and it can be used to learn arbitrary regular grammars when provided with sufficient data. We find that our method consistently attains high recall and precision scores across a range of tests of varying complexity.
Submitted: Sep 23, 2022
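The task the abstract describes can be made concrete with a small sketch. This is not the paper's neural model; it only illustrates the problem setup: a candidate regular language (here represented as a hypothetical DFA over {a, b}) is evaluated against positive and negative example strings, and scored with the precision and recall metrics the abstract mentions. All names and the example language are illustrative assumptions.

```python
# Illustrative sketch of regular grammar induction as an evaluation problem
# (NOT the paper's method): score a candidate DFA against labeled examples.

def run_dfa(transitions, start, accepting, string):
    """Return True if the DFA accepts the string, False otherwise."""
    state = start
    for ch in string:
        state = transitions.get((state, ch))
        if state is None:  # no transition defined: reject
            return False
    return state in accepting

# Hypothetical target language: strings over {a, b} that end in "ab".
transitions = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 1, (1, 'b'): 2,
    (2, 'a'): 1, (2, 'b'): 0,
}
start, accepting = 0, {2}

# Labeled examples, as in learning from positive and negative data.
positives = ["ab", "aab", "bab", "abab"]
negatives = ["", "a", "b", "ba", "abb"]

tp = sum(run_dfa(transitions, start, accepting, s) for s in positives)
fp = sum(run_dfa(transitions, start, accepting, s) for s in negatives)
fn = len(positives) - tp

precision = tp / (tp + fp) if (tp + fp) else 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0
print(precision, recall)  # a candidate matching the target language scores 1.0 1.0
```

An induction method searches over such candidate grammars (in the paper, via a neural model) to maximize exactly these scores on the provided examples.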