ABSTRACT

In a broad sense, all iterative algorithms generate a sequence {xk} of vectors. The sequence may converge for any starting vector x0, or may not; the limit, when it exists, may depend on x0, and may or may not solve the original problem. Convergence to the limit may be slow, and the algorithm may need to be accelerated. The algorithm may involve measured data; the limit may be sensitive to noise in the data, and the algorithm may need to be regularized to lessen this sensitivity. The algorithm may be quite general, applying to all problems in a broad class, or it may be tailored to the problem at hand. Each step of the algorithm may be costly, with only a few steps needed to produce a suitable approximate answer, or each step may be easily performed, with many such steps required. Although convergence of an algorithm is important theoretically, in practice sometimes only a few iterative steps are used.

In this chapter we consider several classes of operators that play important roles in optimization. Up to now we have largely limited our discussion to real vectors and real matrices; in this chapter it is convenient to broaden the discussion to include complex vectors and matrices.
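As a minimal sketch (not taken from the chapter), the generic iterative scheme x_{k+1} = T(x_k) can be written as follows. The map T chosen here is a hypothetical contractive affine map T(x) = Ax + b, so the sequence converges for any starting vector x0 and its limit solves x = Ax + b; the function name `iterate` and the stopping tolerance are illustrative assumptions.

```python
import numpy as np

def iterate(T, x0, tol=1e-10, max_steps=1000):
    """Generate x_{k+1} = T(x_k) until successive iterates are close."""
    x = x0
    for k in range(max_steps):
        x_next = T(x)
        if np.linalg.norm(x_next - x) < tol:
            return x_next, k + 1
        x = x_next
    return x, max_steps

# Illustrative contractive affine map T(x) = A x + b (spectral radius of A < 1):
# the limit is the fixed point of T, i.e., the solution of (I - A) x = b,
# and it does not depend on the starting vector x0.
A = np.array([[0.5, 0.1],
              [0.0, 0.3]])
b = np.array([1.0, 2.0])
T = lambda x: A @ x + b

x_star, steps = iterate(T, np.zeros(2))
```

Starting from a different x0 (say, `np.ones(2)`) yields the same limit, illustrating the case where the limit does not depend on the starting vector; for non-contractive maps the behavior can differ.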