Explainable Decision-Making
Explainable decision-making in artificial intelligence focuses on developing models and methods that not only produce accurate predictions but also provide understandable justifications for their choices. Current research emphasizes integrating interpretability directly into model architectures, such as through decision trees, graph transformers, and rule-based systems augmented by large language models, rather than relying solely on post-hoc explanations. This pursuit is crucial for building trust in AI systems across diverse applications, from autonomous vehicles and medical diagnosis to financial forecasting and customs classification, ensuring both reliability and accountability.
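To make the idea of interpretability built into the model itself concrete, here is a minimal sketch of an intrinsically interpretable classifier: a hand-built decision tree whose prediction function returns both a label and the rule path that justifies it. The loan-approval features, thresholds, and labels are purely illustrative assumptions, not drawn from any dataset or paper mentioned above.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Node:
    """One node of a decision tree; leaves carry a label, internal nodes a split."""
    feature: Optional[str] = None    # None on leaves
    threshold: float = 0.0
    left: Optional["Node"] = None    # branch taken when value <= threshold
    right: Optional["Node"] = None   # branch taken when value > threshold
    label: Optional[str] = None      # set only on leaves


def predict_with_explanation(node: Node, sample: dict) -> Tuple[str, List[str]]:
    """Walk the tree, recording each split decision as a human-readable rule."""
    path = []
    while node.label is None:
        value = sample[node.feature]
        if value <= node.threshold:
            path.append(f"{node.feature} = {value} <= {node.threshold}")
            node = node.left
        else:
            path.append(f"{node.feature} = {value} > {node.threshold}")
            node = node.right
    return node.label, path


# Toy loan-approval tree (hypothetical features and thresholds).
tree = Node(
    feature="income", threshold=50.0,
    left=Node(label="deny"),
    right=Node(
        feature="debt_ratio", threshold=0.4,
        left=Node(label="approve"),
        right=Node(label="deny"),
    ),
)

label, reasons = predict_with_explanation(tree, {"income": 72.0, "debt_ratio": 0.3})
print(label)                 # approve
print("; ".join(reasons))    # income = 72.0 > 50.0; debt_ratio = 0.3 <= 0.4
```

Because the explanation is the exact sequence of tests the model applied, it is faithful by construction — the contrast with post-hoc methods, which approximate an opaque model's behavior after the fact.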