Interpretable Logical Reasoning
Interpretable logical reasoning aims to develop AI systems that not only solve complex problems requiring multiple reasoning steps but also provide transparent explanations for their conclusions. Current research focuses on integrating large language models with structured knowledge bases such as knowledge graphs, and on techniques like question decomposition and submodular optimization for building more interpretable rule-based systems. This work addresses the critical need for trustworthy and explainable AI, particularly in high-stakes domains like medical diagnosis and scientific discovery, where understanding the reasoning process is paramount.
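To make the submodular-optimization idea concrete, below is a minimal illustrative sketch (not taken from any specific paper in this area) of greedy rule selection: each candidate rule is assumed to come with the set of training examples it explains, and the objective is the size of the union of covered examples, which is monotone and submodular, so the greedy strategy carries the standard (1 - 1/e) approximation guarantee. The names `rule_coverage` and `greedy_rule_selection` are hypothetical.

```python
from typing import Dict, List, Set


def greedy_rule_selection(rule_coverage: Dict[str, Set[str]], budget: int) -> List[str]:
    """Greedily pick at most `budget` rules maximizing total example coverage.

    rule_coverage maps each candidate rule (a human-readable string) to the
    set of examples it explains. Coverage (size of the union) is monotone
    and submodular, so greedy selection is a standard, well-behaved choice.
    """
    selected: List[str] = []
    covered: Set[str] = set()
    for _ in range(budget):
        best_rule, best_gain = None, 0
        for rule, examples in rule_coverage.items():
            if rule in selected:
                continue
            gain = len(examples - covered)  # marginal coverage gain of this rule
            if gain > best_gain:
                best_rule, best_gain = rule, gain
        if best_rule is None:  # no remaining rule adds new coverage
            break
        selected.append(best_rule)
        covered |= rule_coverage[best_rule]
    return selected


# Hypothetical usage: choose two rules that jointly explain the most examples.
rules = {
    "parent(X,Y) & parent(Y,Z) -> grandparent(X,Z)": {"e1", "e2", "e3"},
    "parent(X,Y) -> ancestor(X,Y)": {"e3", "e4"},
    "ancestor(X,Y) & parent(Y,Z) -> ancestor(X,Z)": {"e2", "e3"},
}
print(greedy_rule_selection(rules, budget=2))
```

The selected rules form a small, human-readable set whose coverage of the data can be inspected directly, which is the sense in which such rule-based systems support interpretability.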