Group zero-norm regularized robust loss minimization: proximal MM method and statistical error bound
math.OC
Abstract
This study focuses on solving group zero-norm regularized robust loss minimization problems. We propose a proximal majorization-minimization (PMM) algorithm for a class of equivalent difference-of-convex (DC) surrogate optimization problems. First, we present the core principles and iterative framework of the PMM method. Under the assumption that the potential function satisfies the Kurdyka-Łojasiewicz (KL) property, we establish the global convergence of the algorithm and characterize its local (sub)linear convergence rate. Furthermore, for linear observation models whose design matrices satisfy restricted eigenvalue conditions, we derive statistical estimation error bounds between the PMM-generated iterates (including their limit points) and the true solution. These bounds not only rigorously quantify the approximation accuracy of the algorithm but also extend previous results on element-wise sparse composite optimization from reference [57]. To implement the PMM framework efficiently, we develop a proximal dual semismooth Newton method for solving the key subproblems. Extensive numerical experiments on both synthetic data and UCI benchmark datasets demonstrate the superior computational efficiency of our PMM method compared to the proximal alternating direction method of multipliers (pADMM).