Sanity Check
Sanity checks in machine learning and related fields verify that model explanations and predictions are reliable and trustworthy, addressing concerns about bias, manipulation, and inaccuracy. Current research focuses on developing and refining these checks for applications such as image-generation detection, saliency-map interpretation, and time-series classification, often employing model parameter randomization and data randomization tests as well as comparisons against simpler baseline models. These efforts are crucial for improving the transparency and accountability of AI systems, ultimately leading to more robust and reliable models across diverse scientific and practical domains.
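The model parameter randomization test mentioned above can be illustrated with a minimal sketch. The idea: re-initialize the model's parameters at random and recompute the explanation; if the saliency map barely changes, the explanation is insensitive to what the model learned and fails the sanity check. The sketch below assumes a toy linear model whose input-gradient saliency map is just its weight vector; the function names and the cosine-similarity score are illustrative choices, not part of any specific paper's implementation.

```python
import numpy as np

def saliency(weights, x):
    # For a linear model f(x) = w . x, the input gradient (a simple
    # gradient-based saliency map) is the weight vector itself.
    return weights

def parameter_randomization_test(weights, x, rng):
    """Compare the saliency map of the trained model against the map
    produced after randomly re-initializing the parameters.

    Returns the cosine similarity between the two maps: a value near
    zero means the explanation depends on the learned parameters
    (passes the check); a value near one means it does not (fails)."""
    original = saliency(weights, x)
    randomized = saliency(rng.standard_normal(weights.shape), x)
    return float(original @ randomized /
                 (np.linalg.norm(original) * np.linalg.norm(randomized)))

rng = np.random.default_rng(0)
w = rng.standard_normal(16)   # "trained" weights (toy stand-in)
x = rng.standard_normal(16)   # an input to explain
sim = parameter_randomization_test(w, x, rng)
# A faithful explanation should show low similarity after randomization.
print(f"similarity after parameter randomization: {sim:.2f}")
```

In practice the same comparison is run layer by layer on a deep network, and rank-correlation or SSIM between saliency maps is often used in place of cosine similarity.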