ABSTRACT

Inspired by such algorithms, in this chapter we study a first-order method for solving certain convex optimization problems. We focus on problems of the form

min {f(x) + g(Ax) + ω(x) : x ∈ H}   (3.1)

over a real Hilbert space H. We assume that f is a convex function with a Hölder continuous gradient, that g is a Lipschitz continuous convex function, that A is a bounded linear operator, and that ω is a convex function defined over a bounded domain. We also assume that the available computational operations are the gradient of f, the proximity operator of g, and a subgradient of the convex conjugate ω∗.
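To make these three oracles concrete, the following is a minimal sketch (in Python with NumPy, neither prescribed by the chapter) for the hypothetical instance f(x) = 0.5‖x − b‖², g = λ‖·‖₁, and ω the indicator of a Euclidean ball of radius r; all names are illustrative, not the chapter's notation.

```python
import numpy as np

# Hypothetical instance of (3.1):
#   f(x) = 0.5 * ||x - b||^2               (gradient available)
#   g(y) = lam * ||y||_1                   (Lipschitz; prox available)
#   w(x) = indicator of {x : ||x||_2 <= r} (bounded domain), with
#   conjugate w*(u) = r * ||u||_2          (subgradient available)

def grad_f(x, b):
    """Gradient of f(x) = 0.5 * ||x - b||^2."""
    return x - b

def prox_g(y, lam, step=1.0):
    """Proximity operator of step * lam * ||.||_1 (soft-thresholding)."""
    t = step * lam
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def subgrad_w_conj(u, r):
    """A subgradient of w*(u) = r * ||u||_2, i.e. a maximizer of
    <u, x> over the ball {x : ||x||_2 <= r}."""
    n = np.linalg.norm(u)
    if n == 0.0:
        return np.zeros_like(u)  # any point of the ball works at u = 0
    return r * u / n
```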

A particularly common type of problem covered by (3.1) is

min {f(x) + g(Ax) : x ∈ C},   (3.2)

where C is a bounded, closed, convex subset of H. Typical examples are regularization problems with one or more penalties in the objective (captured by the term g ◦ A) and one penalty imposed as a constraint through the set C.
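Problem (3.2) fits the template (3.1) by taking ω to be the indicator function of C, in which case a subgradient of ω∗ at u is any maximizer of ⟨u, x⟩ over C. A minimal sketch of this mapping, assuming the hypothetical constrained lasso-type instance min 0.5‖Mx − b‖² + λ‖Ax‖₁ subject to ‖x‖₁ ≤ τ (all names illustrative):

```python
import numpy as np

# Hypothetical instance of (3.2), mapped onto the template (3.1):
#   f(x) = 0.5 * ||M x - b||^2   -> smooth data-fidelity term
#   g(y) = lam * ||y||_1, composed with the linear operator A
#   C    = {x : ||x||_1 <= tau}  -> encoded as w = indicator of C

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
A = rng.standard_normal((15, 10))
lam, tau = 0.1, 5.0

def grad_f(x):
    """Gradient of the smooth term f(x) = 0.5 * ||M x - b||^2."""
    return M.T @ (M @ x - b)

def linear_max_over_C(u):
    """A subgradient of the conjugate of the indicator of C, i.e. a
    maximizer of <u, x> over the l1-ball {x : ||x||_1 <= tau}: a
    vertex tau * sign(u[i]) * e_i at the largest |u[i]|."""
    i = int(np.argmax(np.abs(u)))
    e = np.zeros_like(u)
    e[i] = tau * np.sign(u[i])
    return e

def objective(x):
    """Full objective of (3.2) for this instance."""
    return 0.5 * np.sum((M @ x - b) ** 2) + lam * np.sum(np.abs(A @ x))
```

Note that the only operation involving C in this sketch is linear maximization over C, which matches the assumption above that a subgradient of ω∗ is computable.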