Publication | Closed Access
Practical Approximate Solutions to Linear Operator Equations When the Data are Noisy
804 Citations | 22 References | Year: 1977
Numerical Analysis, Spectral Theory, Engineering, Variational Analysis, Functional Analysis, Numerical Computation, Uncertainty Quantification, Signal Reconstruction, Practical Approximate Solutions, Regularization (Mathematics), Approximation Theory, Convergence Analysis, Low-rank Approximation, Linear Operator Equations, Inverse Problems, Hilbert Space, Weighted Cross-validation, Approximation Method, White Noise
We consider approximate solutions $f_{n,\lambda}$ to linear operator equations $\mathcal{K}f = g$, of the following form: $f_{n,\lambda}$ is the minimizer in $\mathcal{H}$ of $(1/n)\sum_{j=1}^n [(\mathcal{K}h)(t_j) - y(t_j)]^2 + \lambda \|h\|^2$, where $\mathcal{H}$ is a Hilbert space and the data $\{y(t_j)\}$ satisfy $y(t_j) = g(t_j) + \varepsilon(t_j)$, the $\{\varepsilon(t_j)\}$ being measurement errors. $f_{n,\lambda}$ is the so-called regularized solution, and $\lambda > 0$ is the regularization parameter, to be chosen; it is important to choose $\lambda$ correctly. The purpose of this paper is to propose the method of weighted cross-validation for choosing $\lambda$ from the data. We suppose that $g$ is very smooth and the errors are white noise. It is shown that the weighted cross-validation estimate $\hat\lambda$ estimates the value of $\lambda$ which minimizes $(1/n)\,E\sum_{j=1}^n [(\mathcal{K}f_{n,\lambda})(t_j) - (\mathcal{K}f)(t_j)]^2$. Results related to the convergence of $\|f - f_{n,\hat\lambda}\|$, including rates, are obtained.
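The scheme in the abstract can be illustrated numerically. The sketch below is not from the paper: it discretizes $\mathcal{K}$ as a matrix `K` (here an assumed smooth Gaussian-kernel quadrature), computes the regularized solution by solving the normal equations of the penalized least-squares criterion, and selects $\lambda$ by minimizing a generalized cross-validation score, a standard stand-in for the weighted cross-validation idea. All names (`K`, `regularized_solution`, `gcv_score`) and the test problem are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized setting: K (n x m) stands in for the operator \mathcal{K},
# f_true for the unknown f, and y = K f_true + white noise for the data.
n, m = 100, 50
t = np.linspace(0, 1, n)
s = np.linspace(0, 1, m)
K = np.exp(-10.0 * (t[:, None] - s[None, :]) ** 2) / m  # assumed smooth kernel
f_true = np.sin(2 * np.pi * s)
g = K @ f_true
y = g + 0.01 * rng.standard_normal(n)

def regularized_solution(lam):
    # Minimize (1/n) ||K h - y||^2 + lam ||h||^2 via the normal equations:
    # ((1/n) K^T K + lam I) h = (1/n) K^T y
    A = K.T @ K / n + lam * np.eye(m)
    return np.linalg.solve(A, K.T @ y / n)

def gcv_score(lam):
    # Generalized cross-validation score:
    # V(lam) = (1/n)||(I - H) y||^2 / [(1/n) tr(I - H)]^2,
    # where H maps the data to the fitted values K f_{n,lam}.
    A = K.T @ K / n + lam * np.eye(m)
    H = K @ np.linalg.solve(A, K.T) / n
    resid = y - H @ y
    return np.mean(resid ** 2) / (np.trace(np.eye(n) - H) / n) ** 2

# Choose lambda-hat by minimizing the score over a logarithmic grid.
lams = np.logspace(-8, 0, 40)
lam_hat = lams[np.argmin([gcv_score(l) for l in lams])]
f_hat = regularized_solution(lam_hat)
```

With the data-fit error measured as in the abstract, $(1/n)\sum_j [(\mathcal{K}f_{n,\lambda})(t_j) - (\mathcal{K}f)(t_j)]^2$, the grid-selected `lam_hat` should give a fit close to the noise level, whereas very small $\lambda$ overfits the noise and very large $\lambda$ oversmooths.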