Prediction Instability
Prediction instability, the phenomenon where seemingly minor changes in model training or data lead to significantly different predictions, is a growing concern across machine learning. Current research focuses on understanding and mitigating this instability in various model architectures, including ensembles, deep neural networks, and graph neural networks, often examining the role of factors like model updates, feature stability, and training stochasticity. Addressing prediction instability is crucial for improving the reliability, reproducibility, and trustworthiness of machine learning systems, particularly in high-stakes applications where consistent and dependable predictions are paramount.
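One common way to quantify this instability is "prediction churn": the fraction of test points on which two independently trained models disagree. The sketch below is a minimal, hypothetical illustration (not from any specific paper here), using a bootstrap resample of the training data as the "seemingly minor change" and scikit-learn classifiers as stand-ins for the architectures discussed above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

# Synthetic data standing in for a real task.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, y_train, X_test = X[:800], y[:800], X[800:]

def train_on_bootstrap(seed):
    # A bootstrap resample mimics a minor perturbation of the training data.
    Xb, yb = resample(X_train, y_train, random_state=seed)
    return LogisticRegression(max_iter=1000).fit(Xb, yb)

m1, m2 = train_on_bootstrap(1), train_on_bootstrap(2)

# Churn: fraction of held-out points where the two models disagree,
# even though both were trained on near-identical data.
churn = np.mean(m1.predict(X_test) != m2.predict(X_test))
print(f"prediction churn: {churn:.3f}")
```

A churn of 0 would mean the two runs are behaviorally identical on the test set; nonzero churn is exactly the reproducibility concern described above, and mitigation methods aim to drive it down without sacrificing accuracy.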