Abstract Interpretation
Abstract interpretation aims to understand the internal workings and decision-making processes of complex systems, particularly machine learning models. Current research focuses on methods that explain model predictions, analyze feature importance, and uncover the algorithms a model has learned, drawing on techniques from dynamical systems, information theory, and category theory and applying them to architectures such as transformers, recurrent neural networks, and graph neural networks. By providing insight into model behavior and exposing potential biases or limitations, this work is crucial for building trust in AI systems across diverse applications, from medical diagnosis and legal analysis to engineering and scientific discovery.
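As a concrete illustration of the feature-importance analysis mentioned above, the sketch below computes a simple gradient-times-input attribution for one prediction of a toy classifier. This is a minimal, hypothetical example assuming PyTorch; the model, shapes, and function name are illustrative stand-ins, not an implementation from any particular paper listed here.

    import torch
    import torch.nn as nn

    # Toy classifier standing in for any differentiable model under study
    # (hypothetical; the literature covers many architectures).
    model = nn.Sequential(
        nn.Linear(4, 16),
        nn.Tanh(),
        nn.Linear(16, 3),
    )

    def saliency(model: nn.Module, x: torch.Tensor, target: int) -> torch.Tensor:
        """Gradient-times-input attribution for a single example."""
        x = x.clone().requires_grad_(True)
        logits = model(x)
        logits[target].backward()      # d(logit_target) / d(input)
        return (x.grad * x).detach()   # per-feature contribution estimates

    x = torch.randn(4)
    print(saliency(model, x, target=0))  # attribution scores for class 0

Gradient-based attributions like this are only a first-order approximation of a model's behavior; the research summarized here also spans perturbation-based, probing, and theory-driven approaches.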
Papers
The papers indexed under this topic were published between March 25, 2022 and April 13, 2023.