Overconfidence Problem
The overconfidence problem in machine learning refers to models exhibiting unrealistically high confidence in their predictions, even when incorrect. Current research focuses on mitigating this issue across various model types, including large language models (LLMs), neural networks for image and tabular data, and recommendation systems, employing techniques like knowledge transfer, cautious calibration, and counterfactual explanations to improve prediction accuracy and calibration. Addressing overconfidence is crucial for building trustworthy AI systems, enhancing human-AI collaboration, and ensuring reliable deployment in high-stakes applications where miscalibration can have significant consequences.
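To make the notion of miscalibration concrete, below is a minimal sketch (not drawn from any of the listed papers) of Expected Calibration Error (ECE), a standard metric for the gap between a model's stated confidence and its actual accuracy; all function and variable names are illustrative assumptions.

```python
# Sketch of Expected Calibration Error (ECE): bin predictions by confidence and
# compare average confidence to empirical accuracy in each bin. A large gap in a
# bin means the model is over- (or under-) confident there.
# All names here are illustrative assumptions, not from a specific paper.
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    confidences = np.asarray(confidences, dtype=float)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not np.any(in_bin):
            continue
        avg_conf = confidences[in_bin].mean()   # model's stated confidence
        avg_acc = correct[in_bin].mean()        # how often it is actually right
        ece += in_bin.mean() * abs(avg_conf - avg_acc)
    return ece

# Example: a model that is 95% confident but only ~60% accurate is overconfident.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=1000)
    predictions = np.where(rng.random(1000) < 0.6, labels, 1 - labels)  # ~60% accurate
    confidences = np.full(1000, 0.95)                                   # but 95% confident
    print(f"ECE ~ {expected_calibration_error(confidences, predictions, labels):.3f}")
```

In this toy example the ECE is roughly 0.35, reflecting the 35-point gap between reported confidence and realized accuracy; calibration methods such as temperature scaling aim to shrink exactly this gap.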