ABSTRACT

This section is devoted to the various versions of the maximum principle used in this book. We begin by reviewing the model operator ∆.

Given an open set Ω ⊂ RN , a function u ∈ C2(Ω), and a point x0 ∈ Ω, the Taylor expansion of u at x0 is given by

\[
u(x_0 + h) = u(x_0) + Du(x_0).h + \frac{1}{2}\, D^2 u(x_0 + th)(h, h), \tag{A.1}
\]

where |h| is small, t is some number in the interval (0, 1), and Du ∈ L(RN, R), D2u ∈ B(RN × RN, R) denote the first- and second-order differentials of u. The gradient of u at x0 is defined as the unique vector ∇u(x0) ∈ RN such that ∇u(x0) · h = Du(x0).h for all h ∈ RN (where a · b is the canonical inner product of a, b ∈ RN). The Hessian matrix Hu(x0) of u at x0 is the unique matrix in RN × RN such that (Hu(x0).h) · h = D2u(x0)(h, h) for all h ∈ RN. In particular, if x1, x2, . . . , xN denote coordinates in an orthonormal basis of RN, (A.1) can be rewritten in the familiar form
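As a quick illustration (this example is not in the original text), take u(x) = |x|^2 on RN. Expanding u(x_0 + h) = |x_0|^2 + 2 x_0 · h + |h|^2 and comparing with the definitions above gives

\[
Du(x_0).h = 2\, x_0 \cdot h, \qquad D^2 u(x_0)(h, h) = 2\,|h|^2,
\]

so ∇u(x_0) = 2 x_0 and Hu(x_0) = 2 I. In particular, for the model operator this yields ∆u = 2N at every point.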

\[
\begin{aligned}
u(x_0 + h) &= u(x_0) + \nabla u(x_0) \cdot h + \frac{1}{2}\,(Hu(x_0 + th).h) \cdot h \\
&= u(x_0) + \sum_{i=1}^{N} \frac{\partial u}{\partial x_i}(x_0)\, h_i + \frac{1}{2} \sum_{i,j=1}^{N} \frac{\partial^2 u}{\partial x_i \partial x_j}(x_0 + th)\, h_i h_j.
\end{aligned}
\]
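The coordinate form of (A.1) can be checked numerically. The sketch below (the function u and the point x_0 are hypothetical choices, not from the text) evaluates the second-order Taylor polynomial with the Hessian frozen at x_0; since u is smooth, the discrepancy from u(x_0 + h) is cubic in |h|:

```python
import numpy as np

# Hypothetical C^2 function u: R^2 -> R, chosen only for illustration.
def u(x):
    return x[0]**2 * x[1] + np.sin(x[1])

def grad_u(x):
    # Gradient computed by hand: (∂u/∂x1, ∂u/∂x2)
    return np.array([2.0 * x[0] * x[1], x[0]**2 + np.cos(x[1])])

def hess_u(x):
    # Hessian computed by hand; symmetric, since u is C^2
    return np.array([[2.0 * x[1],  2.0 * x[0]],
                     [2.0 * x[0], -np.sin(x[1])]])

x0 = np.array([1.0, 0.5])
h = np.array([1e-3, -2e-3])

# Second-order Taylor polynomial; freezing the Hessian at x0 (instead of
# x0 + t h as in (A.1)) leaves a remainder of order |h|^3.
taylor2 = u(x0) + grad_u(x0) @ h + 0.5 * h @ (hess_u(x0) @ h)
remainder = abs(u(x0 + h) - taylor2)
print(remainder)  # cubic in |h|, hence very small here
```

Halving h should shrink the remainder by roughly a factor of 8, which is a convenient way to confirm the expected order of the error term.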