Human-Like Explanation

Human-like explanations in artificial intelligence aim to make machine decision-making understandable and trustworthy to people. Current research focuses on models that generate explanations mirroring human reasoning, using techniques such as weight of evidence, foveation-based methods, and semantic graph counterfactuals to improve the clarity and accuracy of explanations, often grounding them in knowledge graphs for added precision. This work is crucial for building trust in AI systems across diverse applications, from educational recommendations to robot control, and for fostering more effective human-AI collaboration.
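
As a concrete illustration of one of these techniques, the sketch below shows how a weight-of-evidence score can be computed per feature and summed into an overall, human-readable explanation of a prediction. The feature names and likelihoods are hypothetical; in a real system they would be estimated from a fitted probabilistic model rather than hard-coded.

```python
import numpy as np

def weight_of_evidence(p_e_given_h, p_e_given_not_h):
    """Weight of evidence WoE(h : e) = log[ P(e | h) / P(e | not h) ].

    Positive values mean the evidence e speaks in favour of hypothesis h,
    negative values speak against it; scores add up over conditionally
    independent pieces of evidence, which is what makes them readable
    as a human-style argument for or against a decision.
    """
    return np.log(p_e_given_h) - np.log(p_e_given_not_h)

# Hypothetical example: explaining why a classifier predicts the class "cat".
# Each entry gives (P(feature | cat), P(feature | not cat)); these numbers
# are assumptions for illustration, not outputs of any specific model.
evidence = {
    "has_whiskers":      (0.90, 0.30),
    "barks":             (0.02, 0.45),
    "retractable_claws": (0.80, 0.10),
}

total = 0.0
for name, (p_h, p_not_h) in evidence.items():
    woe = weight_of_evidence(p_h, p_not_h)
    total += woe
    print(f"{name:>20s}: WoE = {woe:+.2f}")

print(f"{'total':>20s}: WoE = {total:+.2f}")  # overall support for "cat"
```

The per-feature scores read like reasons ("barks" counts heavily against "cat"), and their sum gives the overall strength of the case, which is the property that makes weight-of-evidence explanations feel close to human argumentation.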

Papers