Paper ID: 2201.06515
Differentiable Rule Induction with Learned Relational Features
Remy Kusters, Yusik Kim, Marine Collery, Christian de Sainte Marie, Shubham Gupta
Rule-based decision models are attractive due to their interpretability. However, existing rule induction methods often result in long and consequently less interpretable rule models. This problem can often be attributed to the lack of an appropriately expressive vocabulary, i.e., relevant predicates used as literals in the decision model. Most existing rule induction algorithms presume pre-defined literals, naturally decoupling the definition of the literals from the rule learning phase. In contrast, we propose the Relational Rule Network (R2N), a neural architecture that learns literals that represent a linear relationship among numerical input features, along with the rules that use them. This approach opens the door to increasing the expressiveness of induced decision models by coupling literal learning directly with rule learning in an end-to-end differentiable fashion. On benchmark tasks, we show that these learned literals are simple enough to retain interpretability, yet improve prediction accuracy and yield rule sets that are more concise than those produced by state-of-the-art rule induction algorithms.
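To make the idea concrete, below is a minimal PyTorch sketch of what an end-to-end differentiable "learned literal plus rule" stack can look like. It is not the authors' R2N implementation: the class names, the sigmoid-thresholded linear literals, the fixed temperature, and the product-based soft conjunction are all illustrative assumptions.

    # Sketch only: illustrates differentiable literals and rules, not the
    # paper's actual R2N architecture. All names are hypothetical.
    import torch
    import torch.nn as nn

    class LiteralLayer(nn.Module):
        """Learns k soft literals of the form sigmoid(t * (w . x + b)),
        i.e., thresholded linear relationships among numeric features."""
        def __init__(self, in_features, num_literals, temperature=10.0):
            super().__init__()
            self.linear = nn.Linear(in_features, num_literals)
            self.temperature = temperature  # sharpens outputs toward 0/1

        def forward(self, x):
            return torch.sigmoid(self.temperature * self.linear(x))

    class ConjunctionLayer(nn.Module):
        """Soft AND: a rule fires only if all of its selected literals hold.
        A learned membership matrix decides which literals each rule uses."""
        def __init__(self, num_literals, num_rules):
            super().__init__()
            self.membership = nn.Parameter(torch.randn(num_rules, num_literals))

        def forward(self, lits):
            m = torch.sigmoid(self.membership)  # (rules, literals) in [0, 1]
            # a literal contributes its truth value if selected,
            # and 1 (neutral for AND) otherwise
            gated = 1.0 - m.unsqueeze(0) * (1.0 - lits.unsqueeze(1))
            return gated.prod(dim=-1)           # product as soft conjunction

    # Usage: four learned literals combined into two rules over 3 features.
    x = torch.randn(8, 3)
    rules = ConjunctionLayer(4, 2)(LiteralLayer(3, 4)(x))  # (8, 2) activations

Because every operation here is differentiable, the literal weights and the rule membership matrix can be trained jointly by gradient descent, which is the coupling of literal learning and rule learning that the abstract describes.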
Submitted: Jan 17, 2022