Interpretable Knowledge
Interpretable knowledge research aims to make the decision-making processes of complex machine learning models, such as large language models and deep reinforcement learning agents, transparent and understandable. Current efforts focus on developing methods that integrate symbolic reasoning with neural networks, leveraging techniques like program synthesis, probabilistic logic programming, and knowledge graph augmentation to create explainable models. This work is crucial for building trust in AI systems, enabling human oversight in high-stakes applications, and facilitating the development of more robust and reliable AI technologies across diverse fields.
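One widely used way to combine an opaque neural model with a symbolic, human-readable explanation is post-hoc surrogate modeling: a simple rule-based model is fit to the neural network's predictions so its decision logic can be inspected. The sketch below illustrates this idea in general terms; it is not any specific method referenced here, and the dataset, model sizes, and names such as `black_box` and `surrogate` are illustrative assumptions.

```python
# Minimal sketch: distill a neural "black box" into symbolic rules via a
# shallow decision-tree surrogate. All names and settings are illustrative.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic tabular task standing in for a real high-stakes dataset.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                           random_state=0)

# Opaque model whose decisions we want to explain.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                          random_state=0).fit(X, y)

# Surrogate: fit a shallow tree to the black box's *predictions*, not the
# true labels, so the tree approximates the model's decision logic.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity to black box: {fidelity:.2%}")

# Human-readable rules approximating the network's behaviour.
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(6)]))
```

The printed fidelity score indicates how faithfully the extracted rules track the network; a low value signals that the explanation should not be trusted, which is one reason research in this area also pursues tighter neuro-symbolic integration rather than purely post-hoc explanations.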