Inference Rule
Inference rules, the fundamental building blocks of logical reasoning, are an active area of study in the context of large language models (LLMs): how well these models understand and apply such rules, and how their performance can be improved. Current research examines both the limitations of LLMs in handling complex or compositional rules and the development of novel frameworks and algorithms, including logic scaffolding and neurosymbolic approaches, to strengthen their logical reasoning capabilities. This work matters for improving the reliability and trustworthiness of LLMs across diverse applications, from legal tech and knowledge graph reasoning to general-purpose AI systems.
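To make the notion of an inference rule concrete, the sketch below applies modus ponens (from P and "P implies Q", derive Q) by simple forward chaining until a fixed point. The string-based encoding of facts and (premise, conclusion) rules is an illustrative assumption, not the formalism of any particular paper; compositional rule application, which the research above probes in LLMs, shows up here as chained derivations.

```python
# Minimal forward-chaining sketch of one inference rule: modus ponens.
# Facts are strings; rules are (premise, conclusion) pairs -- an
# illustrative encoding, not any specific paper's formalism.

def forward_chain(facts, rules):
    """Repeatedly apply modus ponens until no new fact is derivable."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"rains"}
rules = [("rains", "ground_wet"), ("ground_wet", "slippery")]
print(sorted(forward_chain(facts, rules)))
# Compositional application: rains -> ground_wet -> slippery
```

A two-step chain like this (the second rule fires only after the first) is exactly the kind of multi-hop, compositional reasoning that LLMs are reported to find harder than single-rule application.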