Low-Complexity Explanation
Low-complexity explanation in AI focuses on developing methods that make the decision-making of complex models, such as deep neural networks and large language models, easier to understand and interpret. Current research emphasizes techniques such as identifying the key input features and their interactions (e.g., AND/OR relationships) that drive a prediction, clustering around representative exemplars to ground explanations in concrete cases, and mitigating the effects of noisy data on model training and interpretation. The pursuit of simpler, more faithful explanations is crucial for building trust in AI systems and for their wider adoption in scientific and practical settings where understanding the reasoning behind a model's output is paramount.
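To make the feature-and-interaction idea concrete, the sketch below scores single-feature importance and a pairwise, AND-style interaction by ablating inputs against a mean baseline. This is a minimal illustration assuming a generic scikit-learn classifier and mean-value ablation; the model, dataset, and scoring scheme are assumptions for demonstration, not the specific methods proposed in the work surveyed here.

```python
# Minimal sketch of ablation-based feature importance and pairwise interaction.
# The classifier, dataset, and mean-baseline ablation are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
baseline = X.mean(axis=0)   # "absent" features are replaced by the dataset mean
x = X[0]                    # instance to explain

def score(mask):
    """Model output (probability of class 1) with unmasked features taken
    from x and masked features taken from the baseline."""
    z = np.where(mask, x, baseline)
    return model.predict_proba(z.reshape(1, -1))[0, 1]

n = len(x)
full = np.ones(n, dtype=bool)

# Single-feature importance: drop in output when feature i alone is ablated.
importance = np.array([
    score(full) - score(full & ~np.eye(n, dtype=bool)[i]) for i in range(n)
])

def interaction(i, j):
    """AND-style pairwise interaction: effect of ablating both features,
    beyond the sum of ablating each one alone."""
    mi, mj, mij = full.copy(), full.copy(), full.copy()
    mi[i] = False
    mj[j] = False
    mij[[i, j]] = False
    return score(full) - score(mi) - score(mj) + score(mij)

top = importance.argsort()[::-1][:3]
print("top features:", top, "importances:", importance[top].round(4))
print("interaction(top0, top1):", round(interaction(top[0], top[1]), 4))
```

A large positive interaction score suggests the two features contribute jointly, beyond what their individual ablations indicate, which is the kind of AND-relationship these explanation methods aim to surface in a compact, human-readable form.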