Level Explanation
Level explanation research aims to understand and interpret the decision-making processes of complex machine learning models, particularly deep neural networks, by analyzing their internal workings at multiple levels of abstraction, from individual input features up to high-level concepts. Current work focuses on generating explanations that are both faithful to model behavior and interpretable to humans across a range of architectures, including vision transformers, graph neural networks, and large language models, often employing techniques such as Shapley values, knowledge distillation, and hierarchical clustering. This line of work matters for building trust in AI systems, improving model robustness and fairness, and supporting the development of more effective and understandable AI tools across application domains.
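
To make the feature-level end of this spectrum concrete, the sketch below computes exact Shapley-value attributions for a single prediction: each feature's contribution is its average marginal effect over all coalitions of the other features, with absent features replaced by a baseline value. The toy model `f`, the input point, and the all-zeros baseline are hypothetical illustrations, not drawn from any particular paper or library.

```python
"""Minimal sketch of exact Shapley-value feature attribution (illustrative only)."""
from itertools import combinations
from math import factorial
import numpy as np


def f(x: np.ndarray) -> float:
    # Toy "model": a fixed nonlinear function of three features.
    return 2.0 * x[0] + x[1] * x[2] - 0.5 * x[2]


def shapley_values(model, x, baseline):
    """Exact Shapley attributions for one input.

    The value of a coalition S is the model output when features in S keep
    their observed values and all other features are set to the baseline.
    Enumeration is exponential in the number of features; fine for a toy case.
    """
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                x_with = baseline.copy()
                x_with[list(S) + [i]] = x[list(S) + [i]]     # coalition plus feature i
                x_without = baseline.copy()
                x_without[list(S)] = x[list(S)]              # coalition without feature i
                phi[i] += weight * (model(x_with) - model(x_without))
    return phi


x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
phi = shapley_values(f, x, baseline)
print("attributions:", phi)
# Efficiency property: attributions sum to f(x) - f(baseline).
print("sum:", phi.sum(), "vs", f(x) - f(baseline))
```

Exact enumeration is only feasible for a handful of features; practical explanation tools approximate these values by sampling coalitions or by kernel- and model-specific estimators, trading exactness for scalability to deep networks.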