Confidence Difference
Confidence difference research focuses on improving the reliability and trustworthiness of machine learning models, particularly large language models (LLMs), by better aligning a model's predicted confidence with its actual accuracy. Current efforts concentrate on methods for calibrating model confidence, such as knowledge transfer techniques and automated annotation of reasoning steps, and often use confidence scores to improve model performance and to flag likely errors. This work is crucial for enhancing the dependability of AI systems across diverse applications, from improving the accuracy of speech recognition to ensuring safe and robust decision-making in critical domains.
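To make the notion of a gap between predicted confidence and actual accuracy concrete, the sketch below computes Expected Calibration Error (ECE), a standard calibration metric. It is an illustrative assumption for this summary, not the specific method of any paper listed below; the bin count and the toy confidence/correctness values are made up for the example.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average |accuracy - confidence| over equal-width confidence bins,
    weighted by the fraction of samples that fall in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        bin_conf = confidences[in_bin].mean()   # mean predicted confidence in the bin
        bin_acc = correct[in_bin].mean()        # empirical accuracy in the bin
        ece += in_bin.mean() * abs(bin_acc - bin_conf)
    return ece

# Toy example: a model that is systematically overconfident.
conf = [0.95, 0.90, 0.85, 0.80, 0.70, 0.60]
hit  = [1,    0,    1,    0,    1,    0]
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")
```

A large ECE indicates the model's stated confidence diverges from how often it is actually right, which is the mismatch the calibration methods summarized above aim to reduce.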
Papers
May 27, 2024
January 17, 2024
October 9, 2023
September 28, 2023
May 21, 2023
March 17, 2023
May 19, 2022