Adaptive minimax optimality in statistical inverse problems via SOLIT -- Sharp Optimal Lepskii-Inspired Tuning
math.ST
/ Authors
/ Abstract
We consider statistical linear inverse problems in separable Hilbert spaces and filter-based reconstruction methods of the form $\hat f_\alpha = q_\alpha\left(T^*T\right)T^*Y$, where $Y$ is the available data, $T$ the forward operator, $\left(q_\alpha\right)_{\alpha\in \mathcal A}$ an ordered filter, and $\alpha > 0$ a regularization parameter. Whenever such a method is used in practice, $\alpha$ has to be chosen appropriately. Typically, the aim is to find, or at least approximate, the best possible $\alpha$ in the sense that the mean squared error (MSE) $\mathbb E [\Vert \hat f_\alpha - f^\dagger\Vert^2]$ w.r.t.~the true solution $f^\dagger$ is minimized. In this paper, we introduce the Sharp Optimal Lepskiĭ-Inspired Tuning (SOLIT) method, which yields an a posteriori parameter choice rule ensuring adaptive minimax rates of convergence. It depends only on the data $Y$, the noise level $\sigma$, the operator $T$, and the filter $\left(q_\alpha\right)_{\alpha\in \mathcal A}$, and does not require any problem-dependent tuning of further parameters. We prove an oracle inequality for the corresponding MSE in a general setting and derive rates of convergence in different scenarios. By a careful analysis we show that no other a posteriori parameter choice rule can achieve a better order of convergence of the MSE. In particular, our results reveal that the common understanding that Lepskiĭ-type methods in inverse problems necessarily lose a logarithmic factor is wrong. In addition, the empirical performance of SOLIT is examined in simulations.
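To make the filter-based estimator $\hat f_\alpha = q_\alpha(T^*T)T^*Y$ concrete, the following is a minimal sketch assuming a finite-dimensional discretization of $T$ and the classical Tikhonov filter $q_\alpha(\lambda) = (\lambda+\alpha)^{-1}$; the diagonal operator, the true solution, and the noise level are hypothetical stand-ins, and the SOLIT parameter choice rule itself (defined in the paper) is not reproduced here.

\begin{verbatim}
import numpy as np

def filter_reconstruction(T, Y, alpha):
    """Filter-based estimator f_hat = q_alpha(T^* T) T^* Y for the
    Tikhonov filter q_alpha(lam) = 1 / (lam + alpha), computed via the
    SVD of a finite-dimensional discretization T of the forward operator."""
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    # T^* T has eigenvalues s**2, so applying q_alpha spectrally gives
    # f_hat = V diag(s / (s**2 + alpha)) U^* Y
    coeffs = (s / (s**2 + alpha)) * (U.T @ Y)
    return Vt.T @ coeffs

# Hypothetical example: mildly ill-posed diagonal operator, Gaussian noise.
rng = np.random.default_rng(0)
n = 200
T = np.diag(1.0 / np.arange(1, n + 1))        # singular values ~ 1/k
f_true = np.sin(np.linspace(0, np.pi, n))     # smooth true solution f^dagger
sigma = 1e-3                                  # noise level
Y = T @ f_true + sigma * rng.standard_normal(n)

# Scan a grid of regularization parameters; an a posteriori rule such as
# SOLIT would pick alpha from Y, sigma, T, and the filter alone, without
# access to f_true.
for alpha in [1e-6, 1e-4, 1e-2]:
    f_hat = filter_reconstruction(T, Y, alpha)
    print(alpha, np.linalg.norm(f_hat - f_true))
\end{verbatim}

The grid scan above only illustrates how the MSE varies with $\alpha$; the point of an a posteriori rule is to select $\alpha$ from the observable quantities alone.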