Inference Rule

Inference rules, the fundamental building blocks of logical reasoning, are being actively studied in the context of large language models (LLMs), with a focus on how well these models understand and apply such rules and on how their performance can be improved. Current research explores both the limitations of LLMs in handling complex or compositional rules and the development of novel frameworks and algorithms, including logic scaffolding and neurosymbolic approaches, to enhance their logical reasoning capabilities. This research is crucial for improving the reliability and trustworthiness of LLMs across diverse applications, from legal tech and knowledge graph reasoning to more general-purpose AI systems.
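To make the notion of an inference rule concrete, here is a minimal forward-chaining sketch that repeatedly applies modus ponens (from a fact `A` and a rule `A → B`, conclude `B`). The function name `forward_chain` and the example facts are illustrative assumptions, not drawn from any particular paper discussed above.

```python
def forward_chain(facts, rules):
    """Derive every conclusion reachable from `facts` via `rules`.

    `rules` is a list of (premises, conclusion) pairs: whenever every
    premise is a known fact, the conclusion is added as a new fact.
    This is repeated until no new facts can be derived (a fixpoint).
    """
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

# Hypothetical example: chaining two rules from one starting fact.
facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]
derived = forward_chain(facts, rules)
```

Compositional-rule benchmarks for LLMs probe exactly this kind of multi-step chaining, which a symbolic engine performs exhaustively but a language model may fail to carry through.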

Papers