ABSTRACT

In general, sparse approximation problems have attracted great interest, given their wide applicability both in signal and image processing and in statistical inference, where many of the problems to be solved require determining sparse solutions of underdetermined linear systems. The literature on sparsity optimization is growing rapidly (see [1, 2] and references therein). More recently, sparsity techniques have also been receiving increased attention in the optimal control community [3, 4]. Given an input signal y, sparse approximation aims to represent y approximately as a linear combination, with coefficient vector x, of elementary signals, which, in many current applications, are drawn from a set of signals that is not necessarily linearly independent. A preference for sparse linear combinations is imposed by penalizing nonzero coefficients; the most common penalty is the number of elementary signals that participate in the approximation. Because of their combinatorial nature, due to the presence of the ℓ0 quasi-norm of the coefficient vector to be estimated, sparse approximation problems are computationally difficult to solve; in general, they are NP-hard [5]. Among the different approaches used to overcome this difficulty, convex relaxation is the most widely used: it replaces the ℓ0 quasi-norm by a related convex function. Given a specific model formulation and a data set, the ℓ1 norm is a selective convex function, which assigns a high number of

    min_{x ∈ ℜ^n} (1/2)‖y − φx‖₂² + λ‖x‖₁        (1)

where φ represents the synthesis matrix of the overcomplete dictionary; if x ∈ ℜ^n and y ∈ ℜ^k, then φ is a k × n matrix. The nonnegative parameter λ sets a compromise between the mean squared approximation error and the sparsity level of the solution. The objective function is composed of two terms: a quadratic ℓ2 term measuring the fit to the observed signal y, and the ℓ1 norm of the coefficient vector x to be estimated. The unconstrained optimization problem represented in (1) is the convex relaxation of the subset selection problem, which is a central problem in many application areas. Owing to its wide applicability, the scientific community has devoted considerable effort to developing techniques and algorithms for its solution. Among the proposed algorithms are homotopy algorithms [10], algorithms based on interior-point methods [11, 9], the majorization-minimization class of algorithms [12], and the gradient projection algorithm [13]. The previous optimization problem is an instance of the general class of ℓp-ℓq optimization problems, where p and q can assume values in the range [0, 2]. It is important to stress that generalizing the data term to an ℓq norm, instead of the ℓ2 norm, gives the approximation criterion statistical robustness properties (when q < 2) [14], making it less sensitive to spurious observations (outliers). On the other hand, generalizing the norm on the coefficient vector to be estimated to an ℓp norm, instead of the ℓ1 norm, with p < 1, moves us toward solving the original combinatorial problem.
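As an illustration of the majorization-minimization family cited above, the following sketch solves problem (1) by iterative soft-thresholding (ISTA), a proximal-gradient method: each iteration takes a gradient step on the quadratic data term and then applies the ℓ1 proximal operator (soft-thresholding). All names and parameter values here are illustrative, not taken from the referenced algorithms.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(phi, y, lam, n_iter=500):
    """Minimize (1/2)||y - phi @ x||_2^2 + lam * ||x||_1 by iterative
    soft-thresholding with fixed step 1/L, L = ||phi||_2^2 (Lipschitz
    constant of the gradient of the quadratic term)."""
    L = np.linalg.norm(phi, 2) ** 2
    x = np.zeros(phi.shape[1])
    for _ in range(n_iter):
        grad = phi.T @ (phi @ x - y)          # gradient of the data term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Hypothetical demo: recover a 3-sparse coefficient vector (n = 100)
# from k = 40 random measurements, i.e. phi is 40 x 100 and k < n.
rng = np.random.default_rng(0)
phi = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = phi @ x_true
x_hat = ista(phi, y, lam=0.01)
```

With the step size 1/L, each iteration is guaranteed not to increase the objective, which is what makes this simple scheme reliable despite its slow worst-case convergence compared with the interior-point and homotopy methods mentioned above.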