Justification Theory
Justification theory seeks to provide rigorous, understandable explanations for the outputs of complex systems, particularly in AI applications. Current research focuses on methods for generating these justifications, including multi-agent systems built on large language models and Bayesian approaches for extracting explanations from "black box" deep neural networks. Such work is crucial for building trust and transparency in AI systems across fields ranging from healthcare and law to recommender systems, by ensuring that decisions are not only accurate but also interpretable and justifiable. The ultimate goal is AI that is effective, accountable, and explainable.
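To make the idea concrete, below is a minimal sketch of one common way a "justification" for a black-box prediction can be operationalized: the smallest set of input features that keeps the model's decision stable when the remaining features are perturbed. This is an illustrative assumption, not the method of any specific paper; the names `justify`, `background`, and `threshold` are hypothetical.

```python
# Illustrative sketch only: a brute-force search for a (near-)sufficient feature
# subset that "justifies" a black-box classifier's prediction. All names and
# parameters here are assumptions for demonstration, not a published API.

from itertools import combinations
import numpy as np


def justify(model, x, background, n_samples=200, threshold=0.95, rng=None):
    """Return the smallest feature subset that preserves model(x)'s label
    when the other features are replaced by values drawn from `background`."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    target = model(x[None, :])[0]          # the prediction we want to justify
    n_features = x.shape[0]

    for size in range(1, n_features + 1):
        for subset in combinations(range(n_features), size):
            # Build perturbed copies of x: features in `subset` stay fixed,
            # the rest are resampled from rows of the background data.
            idx = rng.integers(0, len(background), size=n_samples)
            perturbed = background[idx].astype(float).copy()
            perturbed[:, list(subset)] = x[list(subset)]
            agreement = np.mean(model(perturbed) == target)
            if agreement >= threshold:
                return subset, agreement   # a (near-)sufficient justification
    return tuple(range(n_features)), 1.0   # fall back to the full feature set


if __name__ == "__main__":
    # Toy black box: class 1 iff feature 0 exceeds 0.5; other features are noise.
    black_box = lambda X: (X[:, 0] > 0.5).astype(int)
    background = np.random.default_rng(0).uniform(size=(500, 3))
    subset, agreement = justify(black_box, np.array([0.9, 0.1, 0.4]), background)
    print(f"justifying features: {subset}, prediction stability: {agreement:.2f}")
```

In this toy run the search returns feature 0 alone, since fixing it is enough to keep the prediction stable; richer approaches in the literature replace the brute-force search with Bayesian or learned explainers, but the notion of a minimal, decision-preserving justification is the same.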