ABSTRACT

This chapter focuses on evaluating models. Model criticism and comparison are core activities in statistical modeling generally, and Bayesian psychometric modeling is no exception. Models are convenient fictions: fictions because they are necessarily incorrect, convenient because they are handy tools for representing, processing, and communicating information. Model checking aims to evaluate the fit of the model to the data, and it proceeds according to a particular logic. The chapter describes the machinery associated with model checking, though exactly which residuals, statistics, and discrepancy measures should be pursued depends on the model and on the purpose of the analysis. Model comparison consists of critically evaluating each model under consideration using one or more of these techniques and comparing the results. A classic Bayesian procedure for comparing competing models uses the Bayes factor. An alternative method approximates the Bayes factor using transformations of the Bayesian information criterion.
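
For orientation, a minimal sketch of the comparison quantities named above, stated in standard notation rather than the chapter's own: the Bayes factor is the ratio of marginal likelihoods of two competing models, and a large-sample approximation to it follows from differences in the Bayesian information criterion (BIC).

% Bayes factor comparing models M_1 and M_2 for data y (standard definition)
\[
  \mathrm{BF}_{12}
  = \frac{p(y \mid M_1)}{p(y \mid M_2)}
  = \frac{\int p(y \mid \theta_1, M_1)\, p(\theta_1 \mid M_1)\, d\theta_1}
         {\int p(y \mid \theta_2, M_2)\, p(\theta_2 \mid M_2)\, d\theta_2}.
\]
% BIC for a model with maximized likelihood \hat{L}, k parameters, and n observations,
% and the resulting (Schwarz) approximation to the Bayes factor
\[
  \mathrm{BIC} = -2 \log \hat{L} + k \log n,
  \qquad
  \mathrm{BF}_{12} \approx \exp\!\left\{-\tfrac{1}{2}\bigl(\mathrm{BIC}_1 - \mathrm{BIC}_2\bigr)\right\}.
\]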