ABSTRACT

This preliminary chapter begins with a general description of Krylov subspace methods (also called polynomial iteration methods) and introduces the definition of what will henceforth be called conjugate gradient type methods; the most important algorithms are presented in slightly more detail. Conjugate gradient type methods may be viewed as optimization techniques or as projection methods. Alternatively, they can be studied from the point of view of orthogonal polynomials, and this is the framework chosen in the following chapters. As Section 2.4 shows, many elementary properties of real orthogonal polynomials have interesting implications for conjugate gradient type methods. One consequence is a way of implementing two “adjacent” conjugate gradient type methods at essentially the cost of a single scheme; this is shown in Section 2.5. The final section of this chapter deals with the stability of the numerical algorithms. One aspect is the loss of orthogonality due to finite precision arithmetic and its impact on the performance of the methods; another is the sensitivity of the iterates, after a fixed number of steps, to perturbations of the data. The operator that maps the right-hand side onto the corresponding iterate is nonlinear and may be discontinuous, but its points of discontinuity are confined to a small set, namely a set of first category. Except in Section 2.3, the operator T is always assumed to be self-adjoint and positive semidefinite, with spectrum contained in [0, 1].
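
To make the polynomial viewpoint concrete, a minimal sketch in standard notation (the symbols $x_0$, $r_0$, $q_{k-1}$, and $p_k$ are chosen here for illustration and are not fixed by the abstract): for a linear system $Tx = b$ with initial guess $x_0$ and residual $r_0 = b - Tx_0$, the $k$-th iterate of a Krylov subspace method lies in the affine space
\[
x_k \in x_0 + \mathcal{K}_k(T, r_0), \qquad \mathcal{K}_k(T, r_0) = \operatorname{span}\{\, r_0,\ T r_0,\ \dots,\ T^{k-1} r_0 \,\},
\]
so that $x_k = x_0 + q_{k-1}(T)\, r_0$ for some polynomial $q_{k-1}$ of degree at most $k-1$, and the residual takes the form $r_k = p_k(T)\, r_0$ with $p_k(t) = 1 - t\, q_{k-1}(t)$, normalized by $p_k(0) = 1$. For conjugate gradient type methods, these residual polynomials $p_k$ are orthogonal with respect to a measure supported on the spectrum of $T$; this is the link to the orthogonal polynomial framework developed in the subsequent chapters.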