ABSTRACT

Our goal is to introduce some of the tools useful for analyzing the output of a Markov chain Monte Carlo (MCMC) simulation. In particular, we focus on methods which allow the practitioner (and others!) to have confidence in the claims put forward. The following are the main issues we will address: (1) initial graphical assessment of MCMC output; (2) using the output for estimation; (3) assessing the Monte Carlo error of estimation; and (4) terminating the simulation.

Let π be a density function with support X ⊆ R^d about which we wish to make an inference. This inference is often based on some feature of π. For example, if g : X → R, a common goal is the calculation of

Eπg = ∫_X g(x) π(x) dx. (7.1)

We will typically want the values of several features, such as mean and variance parameters, along with quantiles and so on. As a result, the features of interest form a p-dimensional vector which we call θπ. Unfortunately, in practically relevant settings we often cannot calculate any of the components of θπ analytically or even numerically. Thus we are faced with a classical statistical problem: given a density π, we want to estimate several fixed, unknown features of it. For ease of exposition, we focus on the case where θπ is univariate, but we will come back to the general case at various points throughout.

Consider estimating an expectation as in Equation 7.1. The basic MCMC method entails constructing a Markov chain X = {X0, X1, X2, . . .} on X having π as its invariant density. (See Chapter 1, this volume, for an introduction to MCMC algorithms.) We then simulate X for a finite number of steps, say n, and use the observed values to estimate Eπg with the sample average

ḡ_n := (1/n) ∑_{i=0}^{n−1} g(X_i).
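To make the basic method concrete, the following is a minimal sketch in Python, under assumptions not taken from the text: the target π is a standard normal density, g(x) = x², so Eπg = 1, and the chain is a simple random-walk Metropolis sampler. The function name `rw_metropolis` and the step size are illustrative choices, not the chapter's own algorithm.

```python
import numpy as np

def rw_metropolis(log_target, x0, n, step=2.5, rng=None):
    """Simulate n steps of a random-walk Metropolis chain whose
    invariant density is proportional to exp(log_target)."""
    rng = np.random.default_rng(0) if rng is None else rng
    chain = np.empty(n)
    x = x0
    for i in range(n):
        y = x + step * rng.standard_normal()   # propose a move
        # Accept with probability min(1, π(y)/π(x)); work on the log scale
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x = y
        chain[i] = x
    return chain

# Standard normal target, known only up to a normalizing constant
log_pi = lambda x: -0.5 * x**2

chain = rw_metropolis(log_pi, x0=0.0, n=50_000)
g_bar = np.mean(chain**2)   # ḡ_n, estimating Eπg = Var(X) = 1
print(g_bar)
```

With n = 50,000 steps the sample average ḡ_n lands near the true value 1; how close, and how to quantify that Monte Carlo error, is exactly the subject of the sections that follow.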