Paper ID: 2111.14844

Evaluation of Machine Learning Techniques for Forecast Uncertainty Quantification

Maximiliano A. Sacco, Juan J. Ruiz, Manuel Pulido, Pierre Tandeo

Ensemble forecasting is, so far, the most successful approach to producing relevant forecasts together with an estimate of their uncertainty. Its main limitations are the high computational cost and the difficulty of capturing and quantifying the different sources of uncertainty, particularly those associated with model errors. In this work we perform toy-model and state-of-the-art-model experiments to analyze the extent to which artificial neural networks (ANNs) can model the different sources of uncertainty present in a forecast, in particular those associated with the accuracy of the initial conditions and those introduced by model error. We also compare different training strategies: one based on direct training, which uses the mean and spread of an ensemble forecast as targets; the others rely on an indirect training strategy that uses an analyzed state as target, so that the uncertainty is learned implicitly from the data. Experiments with the Lorenz'96 model show that ANNs are able to emulate some properties of ensemble forecasts, such as the filtering of the most unpredictable modes and a state-dependent quantification of the forecast uncertainty. Moreover, ANNs provide a reliable estimate of the forecast uncertainty in the presence of model error. Preliminary experiments with a state-of-the-art forecasting system also confirm the ability of ANNs to produce a reliable quantification of the forecast uncertainty.
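The two training strategies contrasted in the abstract can be sketched as loss functions. This is a minimal illustrative sketch, not the authors' implementation: the function names, array shapes, and the specific choice of mean-squared error and Gaussian negative log-likelihood are assumptions consistent with the description (direct regression onto ensemble mean and spread vs. implicit uncertainty learning from an analyzed state).

```python
import numpy as np

def direct_loss(pred_mean, pred_spread, ens_mean, ens_spread):
    """Direct strategy (sketch): regress the ANN outputs onto the mean
    and spread of an ensemble forecast, here with a simple MSE."""
    return (np.mean((pred_mean - ens_mean) ** 2)
            + np.mean((pred_spread - ens_spread) ** 2))

def indirect_nll_loss(pred_mean, pred_var, analysis):
    """Indirect strategy (sketch): use an analyzed state as the only
    target and learn the uncertainty implicitly by minimizing a
    Gaussian negative log-likelihood, with pred_var the predicted
    forecast-error variance (constant terms dropped)."""
    return np.mean(0.5 * (np.log(pred_var)
                          + (analysis - pred_mean) ** 2 / pred_var))
```

In the indirect formulation no ensemble is needed at training time: a state-dependent variance emerges because the network is penalized both for overconfidence (small `pred_var` with large errors) and for underconfidence (inflated `pred_var`).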

Submitted: Nov 29, 2021