Abstract Interpretation
Abstract interpretation aims to understand the internal workings and decision-making processes of complex systems, particularly machine learning models. Current research focuses on explaining model predictions, analyzing feature importance, and uncovering the algorithms a model has learned. This work often draws on techniques from dynamical systems, information theory, and category theory, and studies architectures such as transformers, recurrent neural networks, and graph neural networks. The field is crucial for building trust in AI systems across diverse applications, from medical diagnosis and legal analysis to engineering and scientific discovery, because it provides insight into model behavior and helps identify potential biases or limitations.
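As a concrete illustration of the feature-importance analysis mentioned above, the sketch below uses permutation importance: shuffle one feature at a time and measure how much held-out accuracy degrades. This is a minimal example assuming scikit-learn; the dataset and model are illustrative stand-ins, not drawn from any particular paper on this page.

```python
# Minimal sketch of permutation feature importance, one common
# model-explanation technique. Dataset and model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit a black-box model on a standard benchmark dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in test accuracy;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most important features with their variability.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")
```

Permutation importance is model-agnostic, which is why it is a common baseline: the same procedure applies whether the underlying model is a random forest, a transformer, or a graph neural network.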