ABSTRACT

This chapter deals with the verification of probabilistic forecasts. Unlike the verification of deterministic forecasts, probabilistic forecast verification follows the paradigm of maximizing sharpness subject to calibration. Sharpness refers to the spread of the predictive distribution, and is a property of the forecasts only. Calibration, on the other hand, denotes the statistical consistency between forecasts and observations, or, put differently, the agreement between the forecaster's judgment and Nature's choice. Both calibration and sharpness can be assessed quantitatively and visually. Various scoring rules for probabilistic forecast verification are scrutinized, and their relationships are clarified. Following that, the three most commonly used tools for the visual assessment of calibration are presented, namely, the rank histogram, the probability integral transform (PIT) histogram, and the reliability diagram. As in the previous chapter, this chapter ends with a case study that demonstrates a complete workflow of probabilistic forecast verification.