Detection of Uncertainty in Exceedance

Detecting uncertainty in exceedance, i.e., in predictions of events that exceed a given threshold, is crucial for improving the reliability of machine learning models across applications. Current research focuses on quantifying and leveraging this uncertainty in diverse domains, including object detection and tracking, semantic segmentation, and table structure recognition, using techniques such as Bayesian neural networks, test-time augmentation, and attention mechanisms within transformer architectures. This work aims to enhance the robustness and trustworthiness of AI systems, particularly in safety-critical areas such as autonomous driving and medical diagnosis, by providing more reliable predictions and flagging potentially erroneous outputs. The ability to accurately assess and mitigate uncertainty is therefore vital for building more dependable and explainable AI systems.
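To make the idea concrete, the following is a minimal sketch of one of the techniques named above: estimating an exceedance probability and its uncertainty with Monte Carlo dropout, a common approximation to a Bayesian neural network. The `Regressor` network, the threshold, and the inputs are illustrative placeholders, not taken from any particular paper in this collection.

```python
# Minimal sketch: exceedance probability with uncertainty via Monte Carlo dropout.
# The model, threshold, and data below are hypothetical placeholders.
import torch
import torch.nn as nn


class Regressor(nn.Module):
    """Small regression head whose dropout layer is reused at inference time."""

    def __init__(self, in_dim: int = 8, hidden: int = 32, p_drop: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p_drop),  # source of stochasticity for MC sampling
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


@torch.no_grad()
def exceedance_with_uncertainty(model: nn.Module, x: torch.Tensor,
                                threshold: float, n_samples: int = 100):
    """Run repeated stochastic forward passes and summarize the exceedance event.

    Returns the Monte Carlo estimate of P(prediction > threshold) along with the
    mean and standard deviation of the sampled predictions; a probability near
    0.5 or a large std signals an uncertain exceedance decision.
    """
    model.train()  # keep dropout active so each pass samples a different mask
    preds = torch.stack([model(x).squeeze(-1) for _ in range(n_samples)])  # (T, batch)
    p_exceed = (preds > threshold).float().mean(dim=0)
    return p_exceed, preds.mean(dim=0), preds.std(dim=0)


if __name__ == "__main__":
    model = Regressor()
    x = torch.randn(4, 8)  # four dummy inputs
    p, mu, sigma = exceedance_with_uncertainty(model, x, threshold=0.5)
    print("P(exceed):", p)
    print("mean:", mu)
    print("std:", sigma)
```

Test-time augmentation follows the same pattern, except that the stochasticity comes from perturbing the inputs (e.g., flips or noise) rather than from dropout masks.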

Papers