ABSTRACT

Implementing Gibbs sampling or other MCMC algorithms requires repeated evaluation of various full conditional density functions. In the case of hierarchical models built from random effects using Gaussian processes, this requires repeated evaluation of the likelihood and/or joint or conditional densities arising under the Gaussian process; see Section A.2. In particular, such computation requires evaluation of quadratic forms involving the inverse of the covariance matrix, as well as the determinant of that matrix. Strictly speaking, we do not have to obtain the inverse in order to compute the quadratic form. Letting $z^T A^{-1} z$ denote a general object of this sort, if we obtain $A^{1/2}$ (e.g., a triangular Cholesky factor) and solve $z = A^{1/2} v$ for $v$, then $v^T v = z^T A^{-1} z$. Still, with large $n$, computation associated with the resulting $n \times n$ matrices can be unstable, and repeated computation (as in simulation-based model fitting) can be very slow, perhaps infeasible. After all, spatial covariance matrices are, in general, dense, and the Cholesky factorization requires about $n^3/3$ flops. We refer to this situation informally as "the big $n$ problem."
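As a minimal numerical sketch of the factor-and-solve identity above (using NumPy, with a small hypothetical covariance matrix constructed for illustration): factoring $A = LL^T$ and solving the triangular system $Lv = z$ yields $v^T v = z^T A^{-1} z$ without ever forming $A^{-1}$, and the log-determinant falls out of the same factorization.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Hypothetical symmetric positive-definite "covariance" matrix A and vector z
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
z = rng.standard_normal(n)

# Direct evaluation of the quadratic form z^T A^{-1} z (forms the inverse)
direct = z @ np.linalg.inv(A) @ z

# Cholesky-based evaluation: A = L L^T; solve L v = z, then v^T v = z^T A^{-1} z
L = np.linalg.cholesky(A)   # lower-triangular factor, ~n^3/3 flops
v = np.linalg.solve(L, z)   # triangular solve, O(n^2) once L is known
chol = v @ v

# The log-determinant comes from the same factorization:
# log det A = 2 * sum(log diag(L))
logdet = 2.0 * np.sum(np.log(np.diag(L)))
```

Both routes agree numerically, but the Cholesky route is more stable and reuses a single factorization for the quadratic form and the determinant; the point of the text, however, is that even this factorization becomes the bottleneck when $n$ is large.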