Valid sequential inference on probability forecast performance | Prof. Johanna Ziegel, University of Bern
Forecasting and forecast evaluation are inherently sequential tasks. Predictions are often issued on a regular basis, such as every hour, day, or month, and their quality is monitored continuously. However, the classical statistical tools for forecast evaluation are static, in the sense that statistical tests for forecast calibration or comparison are only valid if the evaluation period is fixed in advance.
Building on recent advances in sequential inference, I will discuss new tests for comparing the predictive performance of probability forecasts for binary events, which make it possible to monitor forecasts continuously over time without compromising type-I error control. The methodology is illustrated with a case study on precipitation forecasts.
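As a rough illustration of the kind of anytime-valid testing referred to above (not the specific tests of the talk), the following sketch builds a likelihood-ratio e-process for binary outcomes: under the null hypothesis that the forecast probabilities `p` are the true conditional event probabilities, each multiplicative factor has conditional expectation one, so the running product is a nonnegative test martingale, and Ville's inequality bounds the probability that it ever exceeds `1/alpha` by `alpha`. The names `eprocess_compare`, the alternative forecast `q`, and the simulated data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def eprocess_compare(p, q, y):
    """Likelihood-ratio e-process for binary outcomes.

    Null: p[t] is the true conditional probability of y[t] = 1.
    The alternative forecast q[t] plays the role of a betting strategy.
    Under the null, each factor has conditional expectation
    p*(q/p) + (1-p)*((1-q)/(1-p)) = 1, so the cumulative product
    is a nonnegative test martingale (an e-process).
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    y = np.asarray(y, dtype=int)
    factors = np.where(y == 1, q / p, (1.0 - q) / (1.0 - p))
    return np.cumprod(factors)

# Simulated example (assumed setup): outcomes follow q, while the
# null forecast p is systematically biased upward.
n = 2000
q = rng.uniform(0.2, 0.8, n)               # well-specified alternative forecast
p = np.clip(q + 0.15, 0.01, 0.99)          # miscalibrated null forecast
y = (rng.uniform(size=n) < q).astype(int)

e = eprocess_compare(p, q, y)
alpha = 0.05
# Ville's inequality: P(sup_t e[t] >= 1/alpha) <= alpha under the null.
# Monitoring may therefore stop, and reject, the first time e crosses 1/alpha,
# at any data-dependent time, without inflating the type-I error.
hit = int(np.argmax(e >= 1.0 / alpha)) if np.any(e >= 1.0 / alpha) else None
```

The key contrast with classical tests is that the threshold `1/alpha` is valid at every stopping time simultaneously, so the evaluation period need not be fixed in advance.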