Meaningful Explanation
Meaningful explanation in artificial intelligence focuses on making the decision-making processes of complex models, such as deep learning and reinforcement learning systems, understandable to both experts and non-experts. Current research emphasizes developing methods that generate explanations tailored to specific audiences and tasks, often leveraging large language models or interactive interfaces to improve comprehension. This work is crucial for building trust in AI systems, ensuring accountability, and facilitating effective human-AI collaboration across diverse applications, from recommender systems to security analysis.