Structured Explanation
Structured explanation in machine learning aims to provide clear, understandable justifications for model outputs, moving beyond simple feature importance to reveal the underlying reasoning process. Current research focuses on developing methods that generate explanations in various formats, including tabular data, graph structures, and natural language, often leveraging transformer networks, autoencoders, and reinforcement learning to achieve higher fidelity and efficiency. This work is crucial for enhancing the transparency, trustworthiness, and usability of AI systems across diverse applications, from healthcare and social media moderation to visual understanding and program comprehension.
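The contrast drawn above, between simple feature importance and a structured justification, can be sketched in a toy example. Everything below (the linear "model", the feature names, the occlusion-style attribution, and the templated justification) is an illustrative assumption, not a method from any particular paper:

```python
# Minimal sketch: a flat feature-importance vector vs. a structured
# explanation that ranks factors and renders a templated justification.
# The model, weights, and feature names are hypothetical.

# Toy linear "model": a risk score from three named features.
WEIGHTS = {"age": 0.5, "bmi": 0.3, "smoker": 1.2}

def predict(x):
    return sum(WEIGHTS[f] * x[f] for f in WEIGHTS)

def feature_importance(x, baseline):
    # Occlusion-style attribution: how much the score drops when each
    # feature is replaced by its baseline value. This is the "simple
    # feature importance" view: just a vector of numbers.
    full = predict(x)
    return {f: full - predict({**x, f: baseline[f]}) for f in WEIGHTS}

def structured_explanation(x, baseline):
    # Structured view: the same attributions, but organized into a
    # ranked factor list plus a templated natural-language justification.
    imp = feature_importance(x, baseline)
    ranked = sorted(imp.items(), key=lambda kv: abs(kv[1]), reverse=True)
    steps = [f"{f} contributes {v:+.2f} to the score" for f, v in ranked]
    return {
        "prediction": round(predict(x), 2),
        "ranked_factors": ranked,
        "justification": "; ".join(steps),
    }

patient = {"age": 4.0, "bmi": 2.0, "smoker": 1.0}
baseline = {"age": 0.0, "bmi": 0.0, "smoker": 0.0}
print(structured_explanation(patient, baseline))
```

The point of the sketch is the output format: where a feature-importance method stops at the attribution vector, a structured explainer composes those attributions into an ordered, human-readable justification, the kind of output the methods surveyed here generate with far richer machinery (transformers, autoencoders, reinforcement learning).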