
The Krylov subspace method is iterative and depends on a linear operator A from S to S and on the semi-norm (2.4). It is similar to the GMRES algorithm of Saad and Schultz (1986), except that A need not depend on the interpolation equations and the semi-norm need not be a norm. We let k be the iteration number of the method, and we let S_k be the linear subspace of S that is spanned by the functions A^j s*, j = 1, 2, ..., k, where s* is still the required interpolant. If the k-th iteration is not the final one, then its main task is to calculate an element of S_k, say s_{k+1}, that satisfies the condition

    ||s* - s_{k+1}|| < ||s* - s_k||,                                   (3.1)

for the semi-norm (2.4). An implementation of this method will be described that has the following useful properties for the choice of A that is given in Section 4. If there are no computer rounding errors, and if a tolerance parameter is set to zero, then the required interpolant s* is obtained. The sequence of semi-norms ||s* - s_j||, j = 1, 2, ..., k*, decreases strictly monotonically, where s_1 is the zero function and where k* is the number of the final iteration. Condition (3.1) is achieved for every integer k in [1, k*-1]. The coefficients of the functions A^j s*, j = 1, 2, ..., k, are not found explicitly, because these functions tend to provide a basis of S_k that is highly ill-conditioned. Instead, for k = 1, 2, ..., the function s_{k+1} is calculated from s_k by applying the formula

    s_{k+1} = s_k + α_k d_k                                            (3.2)

in a way that is analogous to an iteration of the conjugate gradient method. Specifically, d_k is the search direction

    d_k = A(s* - s_k) + β_k d_{k-1},                                   (3.3)

the multiplier β_k being defined by the orthogonality condition ⟨d_k, d_{k-1}⟩ = 0, except that the β_k d_{k-1} term is deleted when k = 1, so s_1 = 0 implies d_1 = A s*. The step-length α_k of formula (3.2) is derived from inequality (3.1). The calculation ends when the function values s_{k+1}(x_i), i = 1, 2, ..., n, are sufficiently close to the data f_i, i = 1, 2, ..., n, the termination condition being the requirement
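For definiteness, the orthogonality condition determines the multiplier explicitly, and a natural way of securing inequality (3.1) is to let α_k minimize ||s* - s_k - α d_k|| over α, which agrees with the value of α_1 that is derived later in this section. Under that choice (an assumption of this sketch, since the text only says that α_k is derived from inequality (3.1)), the two parameters take the forms

```latex
\beta_k \;=\; -\,\frac{\langle A(s^*-s_k),\, d_{k-1}\rangle}{\langle d_{k-1},\, d_{k-1}\rangle},
\qquad
\alpha_k \;=\; \frac{\langle s^*-s_k,\, d_k\rangle}{\langle d_k,\, d_k\rangle},
```

the expression for β_k applying for k ≥ 2 only, because the β_k d_{k-1} term is deleted when k = 1.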

    |s_{k+1}(x_i) - f_i| ≤ ε,    i = 1, 2, ..., n,                     (3.4)

where ε is the tolerance parameter that has been mentioned. Every iteration tries to choose α_k in equation (3.2) to satisfy inequality (3.4), the calculation being completed if and only if the attempt is successful. It is necessary for A to be such that the attempt gives termination in the case ||s* - s_k|| = 0, because then the usual reduction ||s* - s_{k+1}|| < ||s* - s_k|| cannot be attained. The construction of the search directions provides the orthogonality conditions

    ⟨d_k, d_j⟩ = 0,    j = 1, 2, ..., k-1,                             (3.5)
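As an illustration of how formulas (3.2) and (3.3) generate the iterates, the following sketch runs the iteration in a finite-dimensional model: S is taken to be R^n, the semi-inner product is taken to be the Euclidean one, and A is a symmetric positive definite matrix, so the self-adjointness and ellipticity that the method needs hold trivially. The matrix, the data, and the step-length formula α_k = ⟨s* - s_k, d_k⟩/⟨d_k, d_k⟩ (exact line minimization) are illustrative assumptions, not the radial-function setting of the paper.

```python
import numpy as np

# Finite-dimensional model of the iteration (illustrative assumptions:
# S = R^n, Euclidean semi-inner product, A symmetric positive definite;
# the paper works instead with radial functions and a semi-norm).
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)          # symmetric positive definite operator

s_star = rng.standard_normal(n)      # the required interpolant s*
s = np.zeros(n)                      # s_1 is the zero function
d_prev = None
norms = [np.linalg.norm(s_star - s)] # ||s* - s_1||

for k in range(1, n + 1):
    g = A @ (s_star - s)             # A(s* - s_k)
    if d_prev is None:
        d = g                        # d_1 = A s*, the beta term being deleted
    else:
        beta = -(g @ d_prev) / (d_prev @ d_prev)  # gives <d_k, d_{k-1}> = 0
        d = g + beta * d_prev        # search direction (3.3)
    alpha = ((s_star - s) @ d) / (d @ d)  # minimizes ||s* - s_k - alpha d_k||
    s = s + alpha * d                # formula (3.2)
    d_prev = d
    norms.append(np.linalg.norm(s_star - s))

# The norms ||s* - s_j|| decrease strictly monotonically, and in this
# model s_{n+1} recovers s* up to rounding errors.
print(norms)
```

In this symmetric positive definite model the iteration is a transformed conjugate gradient method, so it terminates within n steps; in the paper's setting the same monotonic decrease is what conditions (3.1) and (3.5) assert.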

Instead of working directly with the particular A that is specified in Section 4, we are going to establish that it is sufficient if A has the properties

    ⟨s, At⟩ = ⟨As, t⟩,  s, t ∈ S,    and    ⟨s, As⟩ > 0,  s ∈ S \ Π_m.   (3.6)

Then only these properties will have to be verified in Section 4.

If the required interpolant s* is in the space Π_m, then ||s* - s_1|| = ||s*|| is zero, and termination on the first iteration depends on the attempt that has been described. Otherwise, the first iteration sets s_2 = α_1 d_1, the step-length α_1 being defined by the minimization

    min { ||s* - α_1 d_1||^2 : α_1 ∈ ℝ } = ||s* - s_2||^2.             (3.7)

Now s* ∉ Π_m implies d_1 = A s* ∉ Π_m, because otherwise A would map not only Π_m but also s* into Π_m, which would contradict nonsingularity. Therefore ⟨d_1, d_1⟩ is positive, as stated in the second paragraph of Section 2. It follows that the minimization of expression (3.7) defines α_1 = ⟨s*, d_1⟩ / ⟨d_1, d_1⟩ uniquely. Further, α_1 is positive, because s* ∉ Π_m, d_1 = A s* and the ellipticity condition (3.6) provide ⟨s*, d_1⟩ > 0. Hence, if k* ≥ 2, we deduce the strict inequality

    ||s* - s_2|| < ||s*|| = ||s* - s_1||.                              (3.8)
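The first-iteration argument can be checked numerically in the same finite-dimensional model as before (Euclidean inner product and a symmetric positive definite matrix A, both illustrative assumptions): with s_1 = 0 and d_1 = A s*, the step-length α_1 = ⟨s*, d_1⟩/⟨d_1, d_1⟩ is positive, and the strict reduction (3.8) follows.

```python
import numpy as np

# Illustrative model: the Euclidean inner product and a symmetric positive
# definite matrix A stand in for the semi-inner product and operator of the text.
rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)          # symmetric positive definite

s_star = rng.standard_normal(n)      # required interpolant, outside the null space
d1 = A @ s_star                      # d_1 = A s*
alpha1 = (s_star @ d1) / (d1 @ d1)   # minimizer of ||s* - alpha d_1||^2
s2 = alpha1 * d1                     # s_2 = s_1 + alpha_1 d_1 with s_1 = 0

print(alpha1)                                        # positive, by ellipticity
print(np.linalg.norm(s_star - s2), np.linalg.norm(s_star))  # inequality (3.8)
```

Because α_1 is the exact minimizer along d_1 and ⟨s*, d_1⟩ ≠ 0, the reduction in the norm is strict, which is the content of inequality (3.8).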

We now let k be any integer from [2, k*], and we assume inductively that the assertions above hold when k is replaced by k - 1. Thus s_k and d_{k-1} are in the space S_{k-1}. It follows from the definition of S_k and from equations (3.3) and (3.2) that, for any choice of β_k and α_k, both d_k and s_{k+1} are in the space S_k. Further, because the inductive hypothesis includes the relations s_k = s_{k-1} + α_{k-1} d_{k-1} and ||s* - s_k|| < ||s* - s_{k-1}||, the value of ⟨d_{k-1}, d_{k-1}⟩ is positive. Hence equation (3.3) and the orthogonality condition ⟨d_k, d_{k-1}⟩ = 0 define the parameter β_k uniquely.