Publication | Closed Access
On the Performance of Sparse Recovery Via $\ell_p$-Minimization $(0 \leq p \leq 1)$
Year: 2011 · Citations: 58 · References: 32
Topics: Numerical Analysis, Sparse Representation, Engineering, Sparse Recovery Via, Weak Recovery, Compressive Sensing, Signal Reconstruction, Random Gaussian Matrix, Atomic Decomposition, Inverse Problems, Computer Science, Sparse Vectors, Regularization (Mathematics), Random Matrix Theory, Low-rank Approximation, Linear Optimization
It is known that a high-dimensional sparse vector x* ∈ ℝ^n can be recovered from low-dimensional measurements y = Ax*, where A ∈ ℝ^{m×n} (m < n) is the measurement matrix. In this paper, with A being a random Gaussian matrix, we investigate the recovery ability of ℓ_p-minimization (0 ≤ p ≤ 1) as p varies, where ℓ_p-minimization returns the vector with the least ℓ_p quasi-norm among all vectors x satisfying Ax = y. Besides analyzing the performance of strong recovery, where ℓ_p-minimization is required to recover all sparse vectors up to a certain sparsity, we also, for the first time, analyze the performance of "weak" recovery of ℓ_p-minimization (0 ≤ p < 1), where the aim is to recover all sparse vectors on one support with a fixed sign pattern. When α (:= m/n) → 1, we provide sharp thresholds of the sparsity ratio (i.e., the percentage of nonzero entries of a vector) that differentiate success from failure via ℓ_p-minimization. For strong recovery, the threshold strictly decreases from 0.5 to 0.239 as p increases from 0 to 1.
Surprisingly, for weak recovery, the threshold is 2/3 for all p in [0, 1), while the threshold is 1 for ℓ_1-minimization. We also explicitly demonstrate that ℓ_p-minimization (p < 1) can return a denser solution than ℓ_1-minimization. For any α ∈ (0, 1), we provide bounds on the sparsity ratio for strong recovery and weak recovery, respectively, below which ℓ_p-minimization succeeds. Our bound for strong recovery improves on the existing bounds when α is large. In particular, regarding the recovery threshold, this paper argues that ℓ_p-minimization has a higher threshold with smaller p for strong recovery; that the threshold is the same for all p for sectional recovery; and that ℓ_1-minimization can outperform ℓ_p-minimization (p < 1) for weak recovery. These results contrast with the traditional wisdom that ℓ_p-minimization, though computationally more expensive, always has better sparse recovery ability than ℓ_1-minimization since it is closer to ℓ_0-minimization.
Finally, we provide an intuitive explanation of our findings. Numerical examples are also used to unambiguously confirm and illustrate the theoretical predictions.
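As an illustration of the setup described in the abstract (not code from the paper itself), the p = 1 case can be solved exactly as a linear program: minimizing ‖x‖₁ subject to Ax = y becomes minimizing Σ(u + v) over u, v ≥ 0 with A(u − v) = y and x = u − v. A minimal sketch, assuming a small random Gaussian instance with a sparsity ratio well below the recovery threshold:

```python
# Sketch of l1-minimization (basis pursuit) for y = Ax*, using the standard
# LP reformulation x = u - v with u, v >= 0. Instance sizes are illustrative.
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """Return argmin ||x||_1 subject to Ax = y via linear programming."""
    m, n = A.shape
    c = np.ones(2 * n)              # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])       # equality constraint: A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
m, n, k = 20, 40, 3                 # alpha = m/n = 0.5, sparsity ratio k/n = 0.075
A = rng.standard_normal((m, n))     # random Gaussian measurement matrix
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.standard_normal(k)
y = A @ x_true

x_hat = l1_min(A, y)
print(np.allclose(x_hat, x_true, atol=1e-6))  # exact recovery expected at this sparsity
```

At this sparsity ratio, well below the thresholds discussed above, ℓ_1-minimization recovers x* exactly; the ℓ_p (p < 1) problems analyzed in the paper are non-convex and have no comparably simple exact solver.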