Computing one-bit compressive sensing via zero-norm regularized DC loss model and its surrogate
/ Authors
/ Abstract
One-bit compressed sensing is popular in signal processing and communications due to its low storage cost and hardware complexity, but recovering the signal from one-bit information is challenging. In this paper, we propose a zero-norm regularized smooth difference-of-convex (DC) loss model and derive a family of equivalent nonconvex surrogates covering the MCP and SCAD ones. Compared with existing models, the new model and its SCAD surrogate are more robust. To apply proximal gradient (PG) methods with extrapolation to compute their $\tau$-critical points, we provide the expression of the proximal mapping of the zero-norm (resp. $\ell_1$-norm) plus the indicator of the unit sphere. In particular, we prove that under a mild condition, the objective functions of the proposed model and its SCAD surrogate are KL functions of exponent 0, so that the PG methods with extrapolation applied to them possess a local R-linear convergence rate and the PG methods applied to them terminate in finitely many steps.
Numerical comparisons with several state-of-the-art methods show that, in terms of solution quality, the proposed models are remarkably superior to the $\ell_p$-norm regularized models, and are comparable with, and even superior to, those models with a sparsity constraint that require the true sparsity and the sign flip ratio as inputs.
Journal: Journal of Global Optimization