Concept-Based Explanation
Concept-based explanation aims to make the decisions of complex machine learning models, particularly deep neural networks, more transparent by expressing those decisions in terms of high-level, human-interpretable concepts (e.g., "stripes" or "wheel") rather than raw input features. Current research focuses on methods that automatically discover and exploit such concepts, often using disentangled representation learning, reinforcement learning, and generative models to produce concept-based explanations with little or no human annotation. By offering more insightful and reliable explanations than low-level attribution techniques such as saliency maps, this line of work is important for building trust in AI systems across applications ranging from medical diagnosis to autonomous driving.
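To make the idea concrete, the sketch below illustrates one well-known concept-based method, TCAV (Testing with Concept Activation Vectors, Kim et al., 2018): a linear classifier is fit to separate hidden-layer activations of concept examples from random examples, the normal to its decision boundary serves as the concept direction, and the model's sensitivity to that direction is measured. All data here is synthetic stand-ins; in a real pipeline the activations come from a trained network's hidden layer and the gradients from backpropagation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for hidden-layer activations of a trained network:
# 100 examples containing the concept (e.g. "stripes") and 100 that do not.
d = 64
concept_acts = rng.normal(loc=0.5, scale=1.0, size=(100, d))
random_acts = rng.normal(loc=0.0, scale=1.0, size=(100, d))

# 1. Fit a linear classifier separating concept from random activations.
X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(100), np.zeros(100)])
clf = LogisticRegression(max_iter=1000).fit(X, y)

# 2. The Concept Activation Vector (CAV) is the unit normal to the
#    decision boundary, pointing toward the concept class.
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# 3. Conceptual sensitivity: the directional derivative of a class logit
#    along the CAV. Placeholder gradients stand in for what would normally
#    be computed by backpropagating the logit to this layer's activations.
logit_grads = rng.normal(size=(50, d))  # hypothetical gradients, 50 inputs
sensitivities = logit_grads @ cav

# 4. TCAV score: the fraction of inputs whose class logit increases when
#    the activations move in the concept direction.
tcav_score = float(np.mean(sensitivities > 0))
print(f"TCAV score: {tcav_score:.2f}")
```

A score near 1 suggests the class prediction is positively sensitive to the concept, a score near 0 suggests negative sensitivity, and in practice the score is compared against CAVs trained on random splits to test statistical significance.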