Justification System
Justification systems are formal frameworks designed to explain the reasoning behind decisions made by artificial intelligence systems, particularly in applications like fact-checking, automated grading, and recommender systems. Current research emphasizes developing models that generate clear and understandable justifications, often employing neuro-symbolic architectures that combine neural networks with logical reasoning or leveraging retrieval-augmented language models. This work aims to improve the transparency and trustworthiness of AI, enhancing user understanding and facilitating the debugging and refinement of complex algorithms.
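As a rough illustration of the retrieval-augmented approach mentioned above, the sketch below pairs a toy overlap-based retriever with a templated justification for a fact-checking claim. Everything here is an assumption for illustration: the corpus, function names, and the overlap scoring are stand-ins for a real retriever and a learned generation model.

```python
from collections import Counter

# Hypothetical evidence corpus, standing in for a real document index.
CORPUS = [
    "The Eiffel Tower is located in Paris, France.",
    "Mount Everest is the highest mountain above sea level.",
    "Paris is the capital city of France.",
]

def tokenize(text):
    # Crude normalization; a real system would use a proper tokenizer.
    return [t.strip(".,").lower() for t in text.split()]

def retrieve(claim, corpus, k=1):
    """Rank passages by token overlap with the claim — a stand-in
    for a dense or sparse retriever in a retrieval-augmented model."""
    claim_tokens = Counter(tokenize(claim))
    scored = []
    for passage in corpus:
        overlap = sum((claim_tokens & Counter(tokenize(passage))).values())
        scored.append((overlap, passage))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [passage for _, passage in scored[:k]]

def justify(claim, corpus):
    """Emit a justification that cites retrieved evidence, in place
    of a learned justification generator."""
    evidence = retrieve(claim, corpus)[0]
    return f"Claim: {claim!r}. Supporting evidence: {evidence!r}"

print(justify("The Eiffel Tower is in Paris", CORPUS))
```

The design point this sketch makes is that the justification is grounded in retrieved evidence rather than generated freely, which is what lets a user audit the system's reasoning.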