ABSTRACT

In the previous chapter, we addressed the sensitivity and uncertainty of model predictions for cases where no historical data are available. The results of such analyses are always conditional on the assumptions made prior to running the model, and it was noted, particularly for complex model applications, how difficult it can be to be confident about those prior assumptions of model choice, ranges or distributions of parameter values, and boundary conditions. Thus, while we may often be required to carry out only a sensitivity analysis or forward uncertainty analysis, the problem of uncertainty estimation becomes much more interesting when some data are available with which to evaluate model performance and to pose an inverse problem of estimating parameter values (and, in some cases, perhaps, input uncertainties as well). Model calibration by history matching of an observed sequence of data has been the saving grace of most mechanistic environmental modelling. It allows a demonstration of modelling capability and justifies some degree of faith in model predictions. While, as we will see, there may still be no “right” answer to the inverse problem, at least we can use the available data to refine and, we hope, constrain our estimates of the uncertainty associated with any model predictions.