Concept-Based Explanation
Concept-based explanation aims to make the decisions of complex machine learning models, particularly deep neural networks, more transparent and understandable by representing them in terms of high-level human-interpretable concepts. Current research focuses on developing methods for automatically discovering and utilizing these concepts, often employing techniques like disentangled representation learning, reinforcement learning, and generative models to create concept-based explanations, even with limited or no human annotation. This field is crucial for building trust in AI systems across various applications, from medical diagnosis to autonomous driving, by providing more insightful and reliable explanations than traditional methods.
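One widely used concept-based technique is the concept activation vector (CAV): a linear separator is fit between a network's activations on examples of a concept and on random counterexamples, and its normal vector serves as the "concept direction"; the model's sensitivity to the concept is then the directional derivative of its output along that vector. The sketch below illustrates the idea with synthetic activations and a stand-in gradient; the dimensionality, sample counts, and learning rate are illustrative assumptions, not values from any specific paper.

```python
import numpy as np

# Hypothetical CAV sketch: fit a linear separator between activations for
# concept examples vs. random examples; its unit normal is the concept
# direction. All data here is synthetic, standing in for a real layer.

rng = np.random.default_rng(0)
dim = 8  # assumed activation dimensionality

concept_acts = rng.normal(loc=1.0, size=(50, dim))  # e.g. "striped" images
random_acts = rng.normal(loc=0.0, size=(50, dim))   # random counterexamples

X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(50), np.zeros(50)])

# Minimal logistic-regression fit by gradient descent (avoids sklearn).
w = np.zeros(dim)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

cav = w / np.linalg.norm(w)  # unit concept direction

# Concept sensitivity for one input: gradient of the class logit w.r.t. the
# activations, projected onto the CAV. The gradient here is a stand-in.
grad = rng.normal(size=dim)
sensitivity = grad @ cav
print(f"concept sensitivity: {sensitivity:.3f}")
```

Averaging the sign of this sensitivity over a set of inputs gives a global score for how much a concept influences a given class, which is the kind of human-interpretable summary these methods aim for.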