Model Debugging
Model debugging focuses on identifying and correcting errors in machine learning models, aiming to improve their accuracy, reliability, and explainability. Current research emphasizes interactive visualization tools, automated data slicing for detecting systematic errors, and frameworks that leverage multiple large language models (LLMs) or incorporate user feedback to iteratively refine model behavior. These advances are crucial for building trustworthy AI systems, particularly in high-stakes applications such as healthcare, because they enable more effective analysis of model performance and help surface biases or systematic failure modes.
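To make the data-slicing idea concrete, here is a minimal sketch of how an automated slicing pass might surface underperforming subgroups. The record layout, slice labels, and the `find_weak_slices` helper are all hypothetical illustrations, not any specific tool's API: the core idea is simply to compare per-slice accuracy against overall accuracy and flag large gaps.

```python
from collections import defaultdict

def find_weak_slices(records, threshold_gap=0.10):
    """Group prediction outcomes by a categorical 'slice' feature and
    flag slices whose accuracy falls well below the overall accuracy."""
    overall_acc = sum(r["correct"] for r in records) / len(records)

    by_slice = defaultdict(list)
    for r in records:
        by_slice[r["slice"]].append(r["correct"])

    weak = {}
    for name, outcomes in by_slice.items():
        acc = sum(outcomes) / len(outcomes)
        if overall_acc - acc > threshold_gap:
            weak[name] = acc  # slice accuracy lags the overall figure
    return overall_acc, weak

# Hypothetical evaluation log: each record is one prediction with a
# slice label (e.g. a patient age group) and whether it was correct.
records = (
    [{"slice": "age<40", "correct": c} for c in [1, 1, 1, 1, 0]]
    + [{"slice": "age>=40", "correct": c} for c in [1, 0, 0, 0, 1]]
)

overall, weak = find_weak_slices(records)
print(f"overall accuracy: {overall:.2f}")   # 0.60
print(f"underperforming slices: {weak}")    # {'age>=40': 0.4}
```

Real slicing systems search over many candidate feature combinations and apply statistical corrections for multiple comparisons, but the accuracy-gap comparison above is the basic signal they build on.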
Papers