ABSTRACT

The bulk of this chapter presents the most widely used Markov chain simulation methods, the Gibbs sampler and the Metropolis-Hastings algorithm, in the context of our general computing approach based on successive approximation. We sketch a proof of the convergence of Markov chain simulation algorithms and present a method for monitoring convergence in practice. We illustrate these methods in Section 11.7 for a hierarchical normal model. For most of this chapter we consider simple and familiar (even trivial) examples in order to focus on the principles of iterative simulation methods as they are used for posterior simulation. Many examples of these methods appear in the recent statistical literature (see the bibliographic note at the end of this chapter) and also in Parts IV and V of this book. Appendix C shows the details of implementation in the computer languages R and Bugs for the educational testing example from Chapter 5.
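To fix ideas before the formal development, the following is a minimal sketch of a random-walk Metropolis sampler, one of the methods named above. It is written in Python rather than the R/Bugs of Appendix C, and all names (`metropolis`, `log_target`, the standard-normal target, and the jumping scale) are illustrative choices, not part of the chapter's examples.

```python
import math
import random


def metropolis(log_target, start, scale, n_iter, rng):
    """Random-walk Metropolis sampler (illustrative sketch).

    log_target : unnormalized log posterior density
    start      : initial value of the chain
    scale      : standard deviation of the symmetric normal jumping rule
    """
    draws = []
    x = start
    lp = log_target(x)
    for _ in range(n_iter):
        # Propose from a symmetric jumping distribution centered at x.
        prop = x + rng.gauss(0.0, scale)
        lp_prop = log_target(prop)
        # Accept with probability min(1, p(prop)/p(x)), on the log scale.
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        draws.append(x)
    return draws


# Hypothetical usage: sample from a standard normal posterior.
rng = random.Random(42)
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 2.4, 20000, rng)
```

In a real analysis one would run several chains from dispersed starting points and monitor their convergence, as discussed later in the chapter; a single chain is shown here only to keep the sketch short.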