Explanation Selection

Explanation selection in artificial intelligence concerns choosing the most effective and understandable explanation for a given model decision, with the goal of improving human-AI interaction and trust. Current research emphasizes tailoring explanations to users with different levels of expertise, using large language models (LLMs) to turn the output of existing explanation algorithms into human-readable narratives, and applying techniques such as chain-of-thought prompting and model canonization to improve explanation quality. The field matters for responsible AI development: by making model reasoning transparent and accessible to users, it supports areas ranging from education and scientific discovery to high-stakes settings such as recruitment.
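
To make the core idea concrete, below is a minimal Python sketch of expertise-matched explanation selection. It is illustrative only: the `Explanation` dataclass, the 1-5 technical-depth scale, and the nearest-match heuristic are assumptions made for this sketch, not the method of any particular paper.

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    text: str             # human-readable explanation of a model decision
    technical_depth: int  # assumed scale: 1 (lay summary) .. 5 (full attribution detail)

def select_explanation(candidates: list[Explanation], user_expertise: int) -> Explanation:
    """Pick the candidate whose technical depth best matches the user.

    user_expertise uses the same assumed 1..5 scale; real systems might
    instead learn this matching from user feedback.
    """
    return min(candidates, key=lambda e: abs(e.technical_depth - user_expertise))

candidates = [
    Explanation("The loan was denied mainly because of a short credit history.", 1),
    Explanation("Top SHAP attributions: credit_history_len = -0.41, income = +0.12.", 4),
]

# A novice (expertise 1) receives the plain-language narrative;
# an expert (expertise 5) would instead receive the attribution details.
print(select_explanation(candidates, user_expertise=1).text)
```

In practice, the candidate pool might be produced by running several explanation algorithms (e.g., SHAP or LIME) and then having an LLM rewrite their raw output as narratives, with selection applied to the combined set.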

Papers