Interpretable Forecasting

Interpretable forecasting aims to create accurate predictive models while simultaneously providing insights into the factors driving those predictions. Current research focuses on hybrid models combining the accuracy of deep learning architectures like Transformers, CNNs, LSTMs, and GRUs with the interpretability of linear models or additive methods, often incorporating metadata or employing explainable AI techniques like SHAP values. This emphasis on interpretability enhances trust and facilitates better decision-making across diverse applications, from supply chain management and financial forecasting to energy planning and personalized medicine, where understanding the "why" behind a prediction is crucial.
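The additive attributions mentioned above can be made concrete with a small sketch. For a purely linear model, the exact SHAP value of feature i is w_i * (x_i - E[x_i]), so the contributions plus the average prediction recover the model's output exactly. The lag features, weights, and background data below are hypothetical, chosen only to illustrate the decomposition; real pipelines would fit the model and typically use a SHAP library rather than this hand-rolled version.

```python
# Minimal sketch: exact SHAP-style attribution for a linear forecaster.
# For y = b + sum_i w_i * x_i, the exact Shapley value of feature i is
# w_i * (x_i - mean(x_i)), so attributions sum to (prediction - baseline).
# All weights and data here are hypothetical, for illustration only.

def shap_linear(weights, x, background):
    """Exact Shapley values for a linear model.

    weights    : per-feature coefficients w_i
    x          : the instance being explained
    background : dataset used to estimate E[x_i]
    """
    n = len(background)
    means = [sum(row[i] for row in background) / n for i in range(len(weights))]
    return [w * (xi - m) for w, xi, m in zip(weights, x, means)]

def predict(weights, bias, x):
    return bias + sum(w * xi for w, xi in zip(weights, x))

# Hypothetical lag features for a demand forecast: [lag_1, lag_7, temperature]
weights = [0.6, 0.3, 0.05]
bias = 2.0
background = [[100, 95, 20], [110, 105, 22], [90, 100, 18]]
x = [120, 100, 25]

phi = shap_linear(weights, x, background)
pred = predict(weights, bias, x)
base = sum(predict(weights, bias, row) for row in background) / len(background)

# Additivity: baseline plus per-feature contributions recovers the prediction.
assert abs(base + sum(phi) - pred) < 1e-9
```

The additivity check at the end is what makes such explanations trustworthy: each feature's contribution is stated in the units of the forecast itself, so a planner can read off, for example, how much of an elevated demand prediction is driven by last week's sales versus temperature.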

Papers