ABSTRACT

Iterative solvers for sparse systems of linear equations are perhaps the most important algorithms found in parallel application codes. Sparse matrices appear in finite difference and finite element methods for solving partial differential equations as well as in graph algorithms and artificial intelligence applications. The basic ingredients of these algorithms are matrix-vector multiplication and the dot-product operation. This chapter applies the fundamental techniques described in previous chapters to obtain a parallel conjugate gradient algorithm. Other Krylov-based algorithms involve the same fundamental operations, so converting other sparse solvers from sequential to parallel versions is straightforward once the conversion of the conjugate gradient algorithm is understood. The book does not discuss preconditioning, not because it is unimportant but because it is beyond the book’s scope.
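As a rough illustration of the two kernels named above (not code from the chapter), the following minimal C sketch shows a sparse matrix-vector product in compressed sparse row (CSR) storage and a dot product; the array names, the CSR layout, and the small test matrix are assumptions chosen for this example. In a distributed-memory conjugate gradient solver, the local dot-product partial sums would additionally be combined by a global reduction.

#include <stddef.h>
#include <stdio.h>

/* Sparse matrix-vector product y = A*x with A stored in compressed
 * sparse row (CSR) form; row_ptr, col_idx, and val are the usual CSR
 * arrays (names chosen here for illustration). */
void csr_matvec(size_t n, const size_t *row_ptr, const size_t *col_idx,
                const double *val, const double *x, double *y)
{
    for (size_t i = 0; i < n; ++i) {
        double sum = 0.0;
        for (size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += val[k] * x[col_idx[k]];
        y[i] = sum;
    }
}

/* Dot product of two vectors; in a parallel solver this local sum
 * would be followed by a global reduction across processes. */
double dot(size_t n, const double *a, const double *b)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; ++i)
        sum += a[i] * b[i];
    return sum;
}

int main(void)
{
    /* 3x3 example: A = [[4,1,0],[1,3,0],[0,0,2]], x = [1,1,1]. */
    size_t row_ptr[] = {0, 2, 4, 5};
    size_t col_idx[] = {0, 1, 0, 1, 2};
    double val[]     = {4, 1, 1, 3, 2};
    double x[] = {1, 1, 1}, y[3];

    csr_matvec(3, row_ptr, col_idx, val, x, y);
    printf("y = [%g %g %g], x.y = %g\n", y[0], y[1], y[2], dot(3, x, y));
    return 0;
}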