Explanation Capability
Explanation capability in artificial intelligence concerns generating understandable justifications for AI decisions, with the aim of improving trust, transparency, and user comprehension. Current research explores several approaches to generating explanations, including methods based on logic rules, conversational approaches built on large language models, and techniques that adapt explanations to specific users and tasks. This capability is crucial for responsible AI development, particularly in high-stakes domains such as medicine and law, where understanding the reasoning behind a prediction is essential for accountability and effective human-AI collaboration.
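To make the rule-based style of explanation concrete, here is a minimal, hypothetical sketch: a toy classifier that returns its prediction together with the logic rule that fired. All rule names, fields, and thresholds below are invented for illustration and do not come from any specific system discussed above.

```python
# Hypothetical sketch of rule-based explanation generation.
# Each rule pairs a condition with a label; the explanation names the rule that fired.
# Rule names, fields, and thresholds are invented for illustration.

RULES = [
    ("high_risk_rule", lambda p: p["age"] > 65 and p["bp"] > 140, "high risk"),
    ("moderate_risk_rule", lambda p: p["bp"] > 140, "moderate risk"),
]

def predict_with_explanation(patient):
    """Return (label, explanation), where the explanation cites the matching rule."""
    for name, condition, label in RULES:
        if condition(patient):
            return label, f"Predicted '{label}' because rule '{name}' matched."
    return "low risk", "Predicted 'low risk' because no risk rule matched."

label, why = predict_with_explanation({"age": 70, "bp": 150})
print(label)  # high risk
print(why)
```

Because each prediction is tied to an explicit, human-readable rule, the justification is faithful by construction, which is one reason logic-rule approaches remain attractive in high-stakes domains.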