ABSTRACT

This is the final chapter of the book. What we discuss here lies essentially outside the purview of convex optimization; yet, as we will see, convexity plays a fundamental role in the issues discussed. We will discuss two major areas in nonconvex optimization, namely maximization of a convex function and minimization of a d.c. function. The abbreviation d.c. stands for "difference of convex", that is, a d.c. function is one that can be expressed as the difference of two convex functions. More precisely, we will look into the following problems:

max f(x) subject to x ∈ C, (P1)

and

min f(x) − g(x) subject to x ∈ C, (P2)

where f, g : Rⁿ → R are convex functions and C ⊂ Rⁿ is a convex set. A large class of nonconvex optimization problems actually falls into this setting. Note that (P1) can be posed as

min − f(x) subject to x ∈ C

and thus as

min φ(x) − f(x) subject to x ∈ C,

where φ is the zero function. Thus (P1) can also be viewed as a special case of (P2), though we will treat the two problems separately for better understanding.
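To see why the d.c. class is so large, the following illustration (ours, not from the text) may help: an indefinite quadratic is nonconvex yet trivially d.c., and any twice continuously differentiable function whose Hessian is bounded below admits a d.c. decomposition.

```latex
% Illustrative example (not from the text): the indefinite quadratic
%   h(x, y) = x^2 - y^2
% is nonconvex on R^2, but it is d.c., since
h(x, y) = \underbrace{x^2}_{f(x,y)} \;-\; \underbrace{y^2}_{g(x,y)},
% where both f and g are convex. More generally, if h is C^2 with
% Hessian bounded below, i.e. \nabla^2 h(x) \succeq -\rho I for some
% \rho > 0, then
h(x) = \Bigl( h(x) + \tfrac{\rho}{2}\|x\|^2 \Bigr) \;-\; \tfrac{\rho}{2}\|x\|^2
% is a d.c. decomposition, because h + (\rho/2)\|\cdot\|^2 has a
% positive semidefinite Hessian and is therefore convex.
```

A d.c. decomposition is never unique: adding the same convex function to both f and g yields another valid decomposition of the same h.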