Input Uncertainty

Input uncertainty, meaning inaccuracies or randomness in a model's inputs, is a critical challenge in machine learning because it undermines reliable predictions and robust decision-making. Current research focuses on quantifying this uncertainty and propagating it through model architectures such as neural networks and Gaussian processes, often using techniques like conformal prediction, bootstrapping, and mixture-of-experts models to improve calibration and computational efficiency. Addressing input uncertainty is crucial for the reliability and safety of machine learning applications across diverse fields, from robotics and finance to medical imaging and language processing, where input noise or ambiguity is inherent.
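
As a concrete illustration of one common approach, the sketch below propagates input uncertainty by Monte Carlo sampling: it draws samples from an assumed Gaussian noise distribution around each query input, pushes them through a stand-in predictor, and reports a predictive mean and standard deviation. The `predict` function, the noise scale `sigma_x`, and the sample count are illustrative assumptions, not taken from any particular paper listed below.

```python
# Minimal sketch: Monte Carlo propagation of Gaussian input noise
# through a predictive model (illustrative stand-in, not a specific method).
import numpy as np

rng = np.random.default_rng(0)


def predict(x):
    # Placeholder for a trained regressor (e.g., a neural network or GP mean).
    return np.sin(x) + 0.1 * x


def mc_propagate(x, sigma_x, n_samples=1000):
    """Propagate N(x, sigma_x^2) input noise; return predictive mean and std."""
    noise = sigma_x * rng.standard_normal((n_samples,) + np.shape(x))
    preds = predict(x + noise)          # evaluate the model on perturbed inputs
    return preds.mean(axis=0), preds.std(axis=0)


# Example usage: query points with assumed input noise of 0.2.
x_query = np.array([0.0, 1.5, 3.0])
mean, std = mc_propagate(x_query, sigma_x=0.2)
print("predictive mean:", mean)
print("predictive std :", std)
```

The same pattern extends to other propagation schemes: the sampled predictions can feed a bootstrap estimate or serve as conformity scores in a conformal prediction wrapper when calibrated coverage is required.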

Papers