Confidence Measure
Confidence measures in machine learning aim to quantify a model's certainty in its predictions, improving reliability and trustworthiness, especially in high-stakes applications. Current research focuses on developing and refining these measures for various model types, including large language models and deep neural networks, often employing techniques such as Monte Carlo dropout, entropy-based methods, and ensemble diversity. This work is crucial for enhancing the safety and usability of AI systems across diverse fields, from legal NLP and medical diagnosis to Earth observation and educational technology, by giving users a clearer picture of prediction reliability.
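As an illustrative sketch (not drawn from any specific paper listed here), one of the simplest entropy-based measures maps a classifier's predicted distribution to a confidence score in [0, 1]: predictive entropy is zero for a one-hot distribution and log K for a uniform one over K classes, so 1 − H/log K reads as "1 = fully certain, 0 = maximally uncertain". The function names below are assumptions for illustration.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy_confidence(logits):
    """Normalized-entropy confidence: 1 - H(p) / log(K).

    Returns ~1 when the predicted distribution is sharply peaked
    and ~0 when it is close to uniform over the K classes.
    """
    p = softmax(np.asarray(logits, dtype=float))
    h = -np.sum(p * np.log(p + 1e-12), axis=-1)  # predictive entropy
    return 1.0 - h / np.log(p.shape[-1])
```

For example, `entropy_confidence([10.0, -10.0, -10.0])` is close to 1 (a near-one-hot prediction), while `entropy_confidence([0.0, 0.0, 0.0])` is close to 0 (a uniform prediction). Monte Carlo dropout and ensemble-diversity measures follow the same idea but estimate the predictive distribution by averaging over multiple stochastic forward passes or ensemble members first.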