Prediction Instability

Prediction instability is the phenomenon in which seemingly minor changes in model training or data, such as retraining on a slightly updated dataset or with a different random seed, lead to significantly different predictions on the same inputs. It is a growing concern across machine learning. Current research focuses on understanding and mitigating this instability across model architectures, including ensembles, deep neural networks, and graph neural networks, often examining the role of factors like model updates, feature stability, and training stochasticity. Addressing prediction instability is crucial for improving the reliability, reproducibility, and trustworthiness of machine learning systems, particularly in high-stakes applications where consistent and dependable predictions are paramount.
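The effect of training stochasticity can be made concrete with a small experiment: train two copies of the same model on the same data, varying only the random seed that shuffles the training order, and measure the fraction of test points on which their predictions disagree (often called prediction churn). The sketch below is illustrative only, using a hand-rolled logistic regression trained by SGD; the model, dataset, and function names are all hypothetical, not drawn from any particular paper.

```python
import math
import random

def train_logreg(data, seed, epochs=50, lr=0.1):
    """Train a 2-feature logistic regression with SGD.

    The only source of randomness is the seeded shuffle of the
    training order, i.e. pure training stochasticity.
    """
    rng = random.Random(seed)
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        order = list(range(len(data)))
        rng.shuffle(order)  # seed-dependent example ordering
        for i in order:
            (x1, x2), y = data[i]
            p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
            g = p - y  # gradient of log loss w.r.t. the logit
            w[0] -= lr * g * x1
            w[1] -= lr * g * x2
            b -= lr * g
    return w, b

def predict(model, x):
    w, b = model
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0

def churn(model_a, model_b, points):
    """Fraction of points where the two models' predictions differ."""
    flips = sum(predict(model_a, x) != predict(model_b, x) for x in points)
    return flips / len(points)

# Synthetic dataset with label noise, so the decision boundary is ambiguous.
gen = random.Random(0)
data = []
for _ in range(200):
    x = (gen.uniform(-1, 1), gen.uniform(-1, 1))
    y = 1 if x[0] + x[1] + gen.gauss(0, 0.3) > 0 else 0
    data.append((x, y))
test_points = [(gen.uniform(-1, 1), gen.uniform(-1, 1)) for _ in range(500)]

# Identical data and hyperparameters; only the shuffle seed differs.
m1 = train_logreg(data, seed=1)
m2 = train_logreg(data, seed=2)
print(f"churn between seeds: {churn(m1, m2, test_points):.3f}")
```

Any nonzero churn here is attributable entirely to training stochasticity, since the data and hyperparameters are held fixed; in practice the same measurement is applied before and after a model update to quantify how many predictions flipped.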

Papers