Predictive Reliability
Predictive reliability concerns how much the individual predictions of a machine learning model can be trusted, a question that is especially pressing in high-stakes applications such as medicine and autonomous vehicles. Current research emphasizes detecting out-of-distribution inputs and quantifying model uncertainty, often employing autoencoders, Bayesian frameworks, and feature-based approaches alongside confidence scores to produce better reliability estimates. The aim of this work is to make AI systems safer and more dependable by providing tools and frameworks for evaluating prediction reliability, thereby supporting trust in, and responsible deployment of, machine learning models across diverse fields.
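As a concrete illustration of two of the reliability signals mentioned above, the Python sketch below flags out-of-distribution inputs via autoencoder reconstruction error and estimates predictive uncertainty with Monte Carlo dropout. It is a minimal sketch, not the method of any paper listed on this page: the model sizes, the 95th-percentile threshold, and the synthetic data are all illustrative assumptions.

import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Reconstructs in-distribution feature vectors; a large reconstruction error suggests an OOD input."""
    def __init__(self, dim=32, bottleneck=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, bottleneck), nn.ReLU())
        self.decoder = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_error(ae, x):
    """Per-sample mean squared reconstruction error."""
    with torch.no_grad():
        return ((ae(x) - x) ** 2).mean(dim=1)

def mc_dropout_confidence(model, x, passes=20):
    """Mean softmax probability and its variance across stochastic forward passes."""
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(passes)])
    return probs.mean(dim=0), probs.var(dim=0)

if __name__ == "__main__":
    torch.manual_seed(0)
    ae = TinyAutoencoder()
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    in_dist = torch.randn(512, 32)            # stand-in for in-distribution training features
    for _ in range(200):                      # fit the autoencoder on in-distribution data
        opt.zero_grad()
        loss = ((ae(in_dist) - in_dist) ** 2).mean()
        loss.backward()
        opt.step()

    # Calibrate an OOD threshold on held-in data (95th percentile is an assumption).
    threshold = reconstruction_error(ae, in_dist).quantile(0.95)
    shifted = torch.randn(8, 32) * 5 + 10     # synthetic shifted inputs
    print("flagged as OOD:", (reconstruction_error(ae, shifted) > threshold).tolist())

    # Uncertainty from a (hypothetical, untrained) classifier with dropout.
    classifier = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 3))
    mean_p, var_p = mc_dropout_confidence(classifier, shifted)
    print("max confidence per sample:", mean_p.max(dim=1).values.tolist())

In practice, the two signals are often combined: a prediction is treated as reliable only if the input passes the OOD check and the confidence is high with low variance across passes.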
Papers
Thirteen papers on this topic, published between November 30, 2021 and October 30, 2024.