Human-Like Explanation
Human-like explanations in artificial intelligence aim to make machine decision-making understandable and trustworthy to people. Current research focuses on models that generate explanations mirroring human reasoning, using techniques such as weight of evidence, foveation-based methods, and semantic graph counterfactuals to improve the clarity and accuracy of explanations, often incorporating knowledge graphs to enhance precision. This work is crucial for building trust in AI systems across diverse applications, from educational recommendations to robot control, and for fostering more effective human-AI collaboration.
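To give a concrete sense of one of these techniques: weight of evidence scores each piece of observed evidence by how much more likely it is under the hypothesis than under its negation, yielding an additive explanation that can be inspected feature by feature. Below is a minimal Python sketch of that scoring under an independence assumption; the function name, feature names, and probability values are illustrative placeholders, not drawn from any of the work summarized above.

```python
import numpy as np

def weight_of_evidence(p_e_given_h: np.ndarray, p_e_given_not_h: np.ndarray) -> np.ndarray:
    """Per-feature weight of evidence for hypothesis h given evidence e:
    WoE(h : e) = log P(e | h) - log P(e | not h).
    Positive values argue for h, negative values against it; under an
    independence assumption the scores add up into a single, human-readable
    account of why the model favours one outcome over the other.
    """
    return np.log(p_e_given_h) - np.log(p_e_given_not_h)

# Toy example: three binary symptoms observed for a hypothetical diagnosis h.
# The conditional probabilities are made-up numbers for illustration only.
p_e_given_h = np.array([0.80, 0.60, 0.10])      # P(symptom present | h)
p_e_given_not_h = np.array([0.20, 0.50, 0.30])  # P(symptom present | not h)

woe = weight_of_evidence(p_e_given_h, p_e_given_not_h)
for name, w in zip(["fever", "cough", "rash"], woe):
    direction = "supports" if w > 0 else "opposes"
    print(f"{name}: WoE = {w:+.2f} ({direction} the hypothesis)")
```

Reading the per-feature scores in this way is what makes the explanation "human-like": each observation contributes an interpretable amount of support for or against the decision, rather than a single opaque confidence value.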