ABSTRACT

The goal of Chapter 16 is to discuss model selection criteria for comparing competing models with respect to the same statistical environment. The first model selection criterion discussed is an extension of the Akaike Information Criterion (AIC) designed to estimate the expected value of an empirical risk function on a test data set using parameter estimates obtained from a different training data set. The second model selection criterion considered is a new extension of the Bayesian Information Criterion (BIC), which estimates the marginal likelihood of a typical data set given a model using a deterministic Laplace approximation. The third set of model selection criteria considered in the chapter consists of model misspecification criteria for measuring the evidence that a given probability model is misspecified with respect to a particular statistical environment. In addition to reviewing the classical nested models method for assessing model misspecification, the chapter discusses new methods for assessing model misspecification based on the information matrix equality.
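
For orientation, the classical criteria that these extensions generalize, and the information matrix equality underlying the new misspecification tests, take the following standard forms; the notation ($\hat{\ell}_n$ for the maximized log-likelihood on a sample of size $n$, $k$ for the number of free parameters, and $\theta^{*}$ for the true parameter value) is introduced here for illustration only, and the chapter's specific extensions are not reproduced.

% Classical AIC and BIC (standard forms, not the chapter's extensions):
\[
  \mathrm{AIC} = -2\hat{\ell}_n + 2k,
  \qquad
  \mathrm{BIC} = -2\hat{\ell}_n + k \log n .
\]
% Information matrix equality: for a correctly specified model, the negative
% expected Hessian of the log-likelihood equals the expected outer product of
% its gradient at \theta^{*}; misspecification tests measure departures from it:
\[
  -\,\mathbb{E}\!\left[\nabla_{\theta}^{2} \log p(\tilde{x};\theta^{*})\right]
  \;=\;
  \mathbb{E}\!\left[\nabla_{\theta} \log p(\tilde{x};\theta^{*})\,
                    \nabla_{\theta} \log p(\tilde{x};\theta^{*})^{\top}\right].
\]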