ABSTRACT

In Chapter 5, the stability of linear sparse optimization methods was addressed under the so-called weak range space property (RSP) of order $k$ of transposed sensing matrices. Nonlinear sparse optimization methods have also attracted considerable attention in statistical learning, sparse data recovery, and mathematical optimization. In this chapter, we extend the results established for linear sparse optimization to nonlinear sparse optimization methods under the same hypothesis. We prove that the weak RSP assumption remains sufficient to ensure that several nonlinear sparse optimization methods are stable in sparse data recovery, including $\ell_1$-minimization with an $\ell_2$-norm constraint, the nonlinear Dantzig Selector (DS), and LASSO. A by-product of this investigation is a unified stability theorem for these nonlinear sparse optimization methods under various matrix properties, including the RIP condition $\delta_{2k} < 1/\sqrt{2}$, the standard NSP of order $k$, and the RSP of order $k$. The recovery error bounds for the DS and LASSO are established in terms of Robinson's constant, instead of the conventional RIP and NSP constants. Unlike the standard tools used in this area, the analysis in this chapter is carried out deterministically; the key analytic tools used here are the error bound for linear systems due to Hoffman [133] and Robinson [192] and the polytope approximation of the unit balls due to Dudley [92] and Barvinok [21]. It turns out that the weak RSP of order $k$ of $A^T$ is a necessary and sufficient condition for the linear DS to be stable in sparse vector recovery. As a result, the weak RSP assumption cannot be relaxed for standard $\ell_1$-minimization or the standard DS if stable recovery of sparse vectors via these methods is to be guaranteed.
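For orientation, the three methods discussed above admit the following standard formulations (a sketch in conventional notation: $y$ denotes the measurement vector and $\varepsilon, \mu \geq 0$ are tuning parameters; the classical $\ell_\infty$-constrained DS is shown, and the chapter's nonlinear variant and exact parameterizations may differ):
\begin{align*}
  &\text{$\ell_1$-minimization with $\ell_2$-norm constraint:}
    && \min_{x} \ \|x\|_1 \ \ \text{s.t.} \ \|Ax - y\|_2 \le \varepsilon, \\
  &\text{Dantzig Selector (DS):}
    && \min_{x} \ \|x\|_1 \ \ \text{s.t.} \ \|A^T (Ax - y)\|_\infty \le \varepsilon, \\
  &\text{LASSO:}
    && \min_{x} \ \|Ax - y\|_2 \ \ \text{s.t.} \ \|x\|_1 \le \mu.
\end{align*}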