Uncertainty Awareness
Uncertainty awareness in machine learning focuses on developing models that not only make predictions but also quantify their own uncertainty, improving reliability and trustworthiness. Current research emphasizes probabilistic methods, such as Bayesian neural networks and quantile regression, alongside non-probabilistic approaches like interval and fuzzy learning, applied across diverse fields including structural dynamics, natural language processing, and reinforcement learning. This work is crucial for building robust and reliable AI systems, particularly in high-stakes applications where understanding the limitations of predictions is paramount; quantified uncertainty supports better decision-making and enhanced safety.
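As a concrete illustration of the idea, here is a minimal sketch of one simple way to attach an uncertainty estimate to a prediction: a bootstrap ensemble, where a model is refit on resampled data and the spread of the ensemble's predictions serves as the uncertainty. The data, the linear model, and all function names below are illustrative assumptions, not drawn from any specific paper on this page.

```python
# Illustrative sketch: predictive uncertainty via a bootstrap ensemble.
# Data and model are synthetic/hypothetical; pure standard library.
import random
import statistics

random.seed(0)
# Synthetic data: y = 2x + 1 plus Gaussian noise.
xs = [random.uniform(0, 10) for _ in range(200)]
data = [(x, 2 * x + 1 + random.gauss(0, 1.0)) for x in xs]

def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    a = sxy / sxx
    return a, my - a * mx

# Bootstrap: refit on resampled datasets to get a distribution of models.
models = [fit_line(random.choices(data, k=len(data))) for _ in range(100)]

def predict_with_uncertainty(x):
    """Return (mean prediction, std across the ensemble) at input x."""
    preds = [a * x + b for a, b in models]
    return statistics.mean(preds), statistics.stdev(preds)

mean, std = predict_with_uncertainty(5.0)
```

The standard deviation across the ensemble gives a rough confidence measure: predictions far from the training data, or fit on noisier data, yield wider spreads. Bayesian neural networks and quantile regression, mentioned above, pursue the same goal with richer probabilistic machinery.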