ABSTRACT

Data-based model validation compares model outputs to experimental data. Bootstrapping analysis reveals model-prediction uncertainty that might not be recognized from the data or the nominal model alone. Bootstrapping uses the uncertainty in the data, as nature chose to present it, and provides model-prediction uncertainty corresponding to that data uncertainty. Bootstrapping generates a set of model-coefficient values, one for each resampling realization of the data. If the model functionality matches the process, and the experiment generating the data is properly performed and understood, then the residuals should have a variance that matches both the propagation of variance through the experimental data model and the model uncertainty from bootstrapping. The probable error from propagation of uncertainty on the data model should be equivalent to the standard deviation of the residuals. Note, however, that the autocorrelation test presumes both zero bias and uniform variance throughout the data sequence, which may not hold.
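The bootstrapping procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a simple linear data model with synthetic data, resamples the (x, y) pairs with replacement, and refits the model once per realization to obtain one coefficient set per resampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "experimental" data from a hypothetical linear process:
# y = 2 + 3x + noise (stand-in for real measurements)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 3.0 * x + rng.normal(0.0, 1.0, size=x.size)

def bootstrap_coefficients(x, y, n_boot=2000):
    """Resample (x, y) pairs with replacement and refit the model,
    yielding one coefficient set per data-sampling realization."""
    n = x.size
    coeffs = np.empty((n_boot, 2))
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)           # sample with replacement
        coeffs[i] = np.polyfit(x[idx], y[idx], 1)  # [slope, intercept]
    return coeffs

coeffs = bootstrap_coefficients(x, y)
slope_sd, intercept_sd = coeffs.std(axis=0)
print(f"slope     = {coeffs[:, 0].mean():.3f} +/- {slope_sd:.3f}")
print(f"intercept = {coeffs[:, 1].mean():.3f} +/- {intercept_sd:.3f}")
```

The spread of the bootstrapped coefficients is the model-prediction uncertainty corresponding to the uncertainty in the data, which can then be compared against the residual variance as the abstract describes.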