Intuitive Explanation
Intuitive explanation in artificial intelligence concerns making the decision-making processes of complex models, such as large language models and deep neural networks, transparent and understandable to humans. Current research emphasizes visualizing model reasoning, quantifying feature importance, and aligning model outputs with human intuitions, often using techniques such as attention maps, Shapley values, and causal inference frameworks. This work is crucial for building trust in AI systems, improving their reliability in high-stakes applications (e.g., medical diagnosis), and enabling more effective human-AI collaboration.
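To make one of these techniques concrete, the sketch below computes exact Shapley values for feature importance over a small toy scoring function. The `toy_value` function and its three-feature setup are hypothetical stand-ins, not drawn from any particular paper; the brute-force enumeration of coalitions is only practical for a handful of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(value, n_features):
    """Exact Shapley values for a coalition-valued score function.

    `value` maps a tuple of feature indices (a coalition) to the
    model's score when only those features are present. Runtime is
    exponential in n_features, so this is a small-n illustration.
    """
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for size in range(len(others) + 1):
            for subset in combinations(others, size):
                s = len(subset)
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = (factorial(s) * factorial(n_features - s - 1)
                          / factorial(n_features))
                # Marginal contribution of feature i to coalition S
                marginal = value(tuple(sorted(subset + (i,)))) - value(subset)
                phi[i] += weight * marginal
    return phi

# Hypothetical value function: an additive score with one interaction
# term between features 0 and 2, purely for illustration.
def toy_value(coalition):
    score = 0.0
    if 0 in coalition:
        score += 2.0
    if 1 in coalition:
        score += 1.0
    if 0 in coalition and 2 in coalition:
        score += 0.5  # interaction is split evenly between 0 and 2
    return score

print(shapley_values(toy_value, 3))  # approximately [2.25, 1.0, 0.25]
```

Note that the attributions sum to the full-coalition score (3.5), the efficiency property that makes Shapley values attractive for explanation; practical libraries approximate them by sampling coalitions, since exact enumeration grows exponentially with the number of features.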