ABSTRACT

One approach for finding a K-sparse estimate of x is by solving the following optimization problem

$$\hat{x} = \arg\min_{x}\ \|y - \Phi x\|_2^2 \quad \text{subject to} \quad \|x\|_0 \le K \qquad (10.3)$$

where $\|\cdot\|_0$ denotes the $\ell_0$ pseudo-norm, $\|x\|_0 = \#\{j : x_j \neq 0\}$, and $\|\cdot\|_p$ for $p \ge 1$ denotes the usual $\ell_p$-norm, $\|x\|_p = \left(\sum_{i=1}^{N} |x_i|^p\right)^{1/p}$. This optimization problem is known to be NP-hard, and hence suboptimal strategies have been under active research; see [8] for a review. These methods, such as Compressive Sampling Matching Pursuit (CoSaMP) [14] or Iterative Hard Thresholding (IHT) [2, 3], are guaranteed to perform very well provided that suitable conditions (e.g., a restricted isometry property on $\Phi$ and non-impulsive noise) are met. It is important to note that the derived recovery bounds depend linearly on $\|\varepsilon\|_2$, and thus these methods are not guaranteed to provide accurate reconstruction/approximation under heavy-tailed (spiky) non-Gaussian noise. An alternative approach to enforcing sparsity of the solution is to impose an $\ell_1$ constraint on the signal, i.e., $\|x\|_1 \le \delta$, as is done in the celebrated LASSO method [18].