Interpretable Logical Reasoning

Interpretable logical reasoning aims to develop AI systems that not only solve complex, multi-step reasoning problems but also provide transparent explanations for their conclusions. Current research focuses on integrating large language models with structured knowledge bases such as knowledge graphs, and on techniques such as question decomposition and submodular optimization for building compact, interpretable rule-based systems. This work is significant because it addresses the critical need for trustworthy and explainable AI, particularly in high-stakes domains such as medical diagnosis and scientific discovery, where understanding the reasoning process is paramount.
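To make the submodular-optimization angle concrete, here is a minimal sketch of greedy rule selection under a monotone submodular coverage objective: a common recipe for extracting a small, interpretable rule set from a large pool of candidates. All names (`Rule`, `coverage`, `select_rules`) and the toy data are hypothetical illustrations, not the method of any specific paper listed below.

```python
# Illustrative sketch: greedy selection of an interpretable rule subset
# under a submodular coverage objective. Names and data are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    """A candidate logical rule and the examples it correctly explains."""
    name: str
    covered: frozenset  # ids of training examples this rule explains


def coverage(selected: list[Rule]) -> int:
    """Monotone submodular objective: number of distinct examples covered."""
    covered: set = set()
    for rule in selected:
        covered |= rule.covered
    return len(covered)


def select_rules(candidates: list[Rule], budget: int) -> list[Rule]:
    """Greedy maximization; for monotone submodular objectives this
    achieves the classic (1 - 1/e) approximation to the optimum."""
    selected: list[Rule] = []
    remaining = list(candidates)
    for _ in range(budget):
        base = coverage(selected)
        # Pick the rule with the largest marginal coverage gain.
        best = max(remaining,
                   key=lambda r: coverage(selected + [r]) - base,
                   default=None)
        if best is None or coverage(selected + [best]) - base == 0:
            break  # no remaining rule adds new coverage; stop early
        selected.append(best)
        remaining.remove(best)
    return selected


if __name__ == "__main__":
    rules = [
        Rule("r1", frozenset({1, 2, 3})),
        Rule("r2", frozenset({3, 4})),
        Rule("r3", frozenset({5})),
        Rule("r4", frozenset({1, 2})),
    ]
    # Greedy picks r1 (gain 3), r2 (gain 1), r3 (gain 1); r4 adds nothing.
    print([r.name for r in select_rules(rules, budget=3)])
```

Because marginal gains of a submodular function shrink as the selected set grows, the greedy loop naturally favors rules that explain new examples rather than redundant ones, which is why this style of objective yields small rule sets that humans can inspect.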

Papers