ABSTRACT

The subject of this chapter lies in the area of theoretical computer science, although it is closely related to algebraic and numerical computations for the sciences, engineering, and electrical engineering. Our central theme is the dramatic acceleration of the most fundamental computations with integers, polynomials, and structured matrices by means of the fast Fourier transform (FFT). More precisely, we first apply the FFT to multiply a pair of integers, a pair of polynomials, and a structured matrix by a vector in nearly linear time, versus the higher-order, typically quadratic, time required for the same operations by the classical algorithms. Then we extend this acceleration to other fundamental operations by ultimately reducing them to multiplication. We state the complexity bounds under the random access machine (RAM) model of computation [1]. In most cases we assume the arithmetic model, that is, we assign a unit cost to addition, subtraction, multiplication, and division of real numbers, as well as to reading or writing them into a memory location. This model is realistic for computations with a fixed (e.g., the IEEE standard single and double) precision, which fits the size of a computer word; in this case the arithmetic model turns into the word model [50]. In other cases we assume extended precision of computing and the Boolean or bit model, that is, we assign a unit cost to every Boolean or bit operation. This accounts for both the arithmetic operations and the length (precision) of the operands. We always specify whether we use the arithmetic, word, or Boolean model unless this is clear from the context.
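To make the nearly linear time claim concrete, the following minimal Python sketch (our illustration, not the chapter's code; the names fft and poly_mul are ours) multiplies two polynomials by the evaluation-interpolation scheme based on a radix-2 FFT, in O(n log n) ops versus the classical quadratic count:

```python
import cmath

def fft(a, invert=False):
    """Radix-2 Cooley-Tukey FFT; len(a) must be a power of 2."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)  # root-of-unity twiddle factor
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def poly_mul(p, q):
    """Product of two polynomials given by their coefficient lists."""
    n = 1
    while n < len(p) + len(q) - 1:
        n *= 2
    # Evaluate both polynomials at the n-th roots of unity (forward FFT) ...
    fp = fft([complex(c) for c in p] + [0j] * (n - len(p)))
    fq = fft([complex(c) for c in q] + [0j] * (n - len(q)))
    # ... multiply the values pointwise, then interpolate (inverse FFT).
    prod = fft([x * y for x, y in zip(fp, fq)], invert=True)
    # Division by n completes the inverse FFT; rounding removes the
    # floating-point noise when the coefficients are integers.
    return [round(c.real / n) for c in prod[:len(p) + len(q) - 1]]

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
assert poly_mul([1, 2], [3, 4]) == [3, 10, 8]
```

Integer multiplication reduces to the same scheme: treat the digits of the two integers (in a suitable base) as polynomial coefficients, multiply the polynomials, and propagate the carries.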

Section 18.2 examines the fundamental transforms of vectors, in particular, the discrete Fourier transform, its inverse, vector convolution, its extension to wrapped convolutions, and the sine and cosine transforms. We specify the FFT, discuss it in some detail, and apply it as the basic algorithm for computing the aforementioned transforms.
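For reference, the transform pair in question can be written as follows (these are the standard definitions; sign and scaling conventions vary across texts):

```latex
% DFT of the vector p = (p_0, ..., p_{n-1}) at the powers of the primitive
% n-th root of unity \omega_n = e^{2\pi i/n}, and the inverse transform:
\hat p_k = \sum_{j=0}^{n-1} p_j \,\omega_n^{jk}, \qquad
p_j = \frac{1}{n} \sum_{k=0}^{n-1} \hat p_k \,\omega_n^{-jk}, \qquad
j, k = 0, 1, \ldots, n - 1.
```

The DFT maps the (wrapped) convolution of two vectors into the componentwise product of their images, which is exactly the property exploited by the multiplication sketch above.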

Section 18.3 applies these results to some fundamental operations on univariate polynomials and shows the correlations between the main vector transforms and polynomial arithmetic. The results are carried over to the integers and to polynomials in several variables.

Section 18.4 covers applications to structured matrices, which are omnipresent in scientific and engineering computations and in signal and image processing. The mn entries of an m × n structured matrix are defined via O(m + n) parameters, which means linear memory space versus quadratic for general matrices. Based on the application of the FFT and on the correlation between computations with structured matrices and polynomials, one can similarly decrease the arithmetic time complexity of multiplying such a matrix by a vector and of other fundamental operations with structured matrices. This naturally leads us to a unification of structured matrix computations, which in turn helps us to improve the computations further.
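To illustrate this point for one structured class, here is a sketch (ours, reusing the poly_mul routine above) for Toeplitz matrices: such an n × n matrix T is constant along its diagonals, so it is defined by 2n − 1 parameters, and Tv is a fragment of a convolution, computable in O(n log n) ops rather than O(n^2):

```python
def toeplitz_matvec(t, v):
    """Multiply an n x n Toeplitz matrix T by the vector v in O(n log n) ops.

    T is constant along each diagonal: T[i][j] = t[n - 1 + i - j], so the
    list t of length 2n - 1 holds the first row right-to-left followed by
    the rest of the first column top-to-bottom.
    """
    n = len(v)
    assert len(t) == 2 * n - 1
    # (T v)_i = sum_j t[n-1+i-j] v_j is a slice of the convolution t * v,
    # which poly_mul computes via three FFTs.
    full = poly_mul(t, v)           # coefficients of the full convolution
    return full[n - 1 : 2 * n - 1]  # keep the n central entries

# 2 x 2 example: t = [4, 5, 6] encodes T = [[5, 4], [6, 5]].
assert toeplitz_matvec([4, 5, 6], [1, 1]) == [9, 11]
```

The other classic structures (Hankel, Vandermonde, Cauchy) admit similar FFT-based accelerations, which underlies the unification mentioned above.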

We practically omit the lower-bound topics, referring the reader to [18,25,73] for some nontrivial results. One can always apply the information lower bounds: since each arithmetic and each Boolean operation has at most two operands and one output, there must always be at least max{O, I/2} operations, where O and I are the sizes of the output and input, respectively. We also omit certain generalizations of Fourier transforms that are of some interest in theoretical computer science; see, for instance, [25]. Many important applications of the FFT to computations in the sciences, engineering, and electrical engineering cannot be covered in a chapter of the present size (see the end of Section 18.2.1 and the introduction of Section 18.4).

We prefer to cite books, surveys, or comprehensive articles rather than the publications that first contained a certain result. There is a tacit understanding that the interested reader will look into the bibliographies of the cited references, in which the historical development is outlined. Our entire exposition largely draws from the survey articles [84,85] and the books [13,91].

Hereafter, (v)_i = v_i and (A)_{i,j} = a_{i,j} for a column vector v = [v_i]_i and a matrix A = [a_{i,j}]_{i,j}; log stands for log_2 unless specified otherwise; a plain reference "x.y" stands for "Section x.y"; "ops" stands for arithmetic operations; and W^T stands for the transpose of a matrix or vector W, and W^H for its Hermitian, that is, complex conjugate, transpose.