Human Uncertainty
Human uncertainty, inherent in perception and judgment, affects the reliability and effectiveness of AI systems, particularly in model evaluation and human-in-the-loop applications. Current research focuses on quantifying this uncertainty and incorporating it into models, using methods such as Bayesian neural networks and hierarchical reinforcement learning to improve robustness and decision-making. Accounting for human uncertainty is essential for building trustworthy AI systems, improving the accuracy of automated evaluations, and ensuring reliable behavior in safety-critical domains.
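As a concrete illustration of the Bayesian-neural-network direction mentioned above, below is a minimal PyTorch sketch of Monte Carlo dropout, a common approximation: dropout is left active at inference time, and averaging many stochastic forward passes yields a predictive distribution whose entropy can be read as uncertainty. All class names, dimensions, and hyperparameters here are illustrative, not taken from any specific paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch: MC dropout as an approximate Bayesian neural network.
# Keeping dropout active at test time makes each forward pass stochastic;
# the spread across passes approximates predictive uncertainty.

class MCDropoutClassifier(nn.Module):
    def __init__(self, in_dim=16, hidden=64, n_classes=3, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=50):
    model.train()  # keep dropout active so each pass samples a different subnetwork
    probs = torch.stack(
        [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )
    mean = probs.mean(dim=0)  # averaged predictive distribution
    # Predictive entropy: higher values indicate less confident predictions.
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy

model = MCDropoutClassifier()
x = torch.randn(8, 16)  # a batch of dummy inputs
mean_probs, uncertainty = predict_with_uncertainty(model, x)
print(uncertainty)
```

In evaluation settings, the model's averaged distribution can then be compared against the distribution of human annotator labels rather than a single hard label, which is one way the literature incorporates human disagreement into performance metrics.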