The paper presents first-projection-then-regularization hybrid algorithms for large-scale general-form regularization. They are based on a subspace projection method in which the matrix $A$ is first projected onto a subspace, typically a Krylov subspace, which is implemented via the Golub-Kahan bidiagonalization process applied to $A$ with starting vector $b$. A regularization term is then added to the projected problem. Finally, an iterative algorithm is used to solve the resulting inner least squares problems. The resulting algorithms are called \emph{hybrid CGME (hyb-CGME)} and \emph{hybrid TCGME (hyb-TCGME)}. We first prove that the inner least squares problems become better conditioned as the projection dimension $k$ increases, so that the iterative inner solver converges faster. We then show how to choose the stopping tolerances for {hyb-CGME} and {hyb-TCGME} when solving the inner least squares problems, so as to guarantee that the iteratively computed regularized solutions attain the same accuracy as the exactly computed ones. Numerical experiments illustrate that the best regularized solution by {hyb-TCGME} is as accurate as that by JBDQR, a joint bidiagonalization based algorithm, whereas the best regularized solution by {hyb-CGME} is slightly less accurate than those by hyb-TCGME and JBDQR. Moreover, the experiments show that when the rank-$k$ approximation to $A$ generated by the projection is less accurate, the regularized solutions computed by our hybrid algorithms are also less accurate, at least numerically.
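The projection step described above relies on the Golub-Kahan (Lanczos) bidiagonalization of $A$ with starting vector $b$, which after $k$ steps produces orthonormal bases $U_{k+1}$, $V_k$ and a lower bidiagonal $B_k$ satisfying $AV_k = U_{k+1}B_k$. The following is a minimal illustrative sketch of that recurrence (the function name and dense-matrix setting are assumptions for illustration; the paper targets large-scale, typically sparse, problems):

```python
import numpy as np

def golub_kahan_bidiag(A, b, k):
    """Run k steps of Golub-Kahan bidiagonalization of A with starting
    vector b.  Returns U (m x (k+1)) and V (n x k) with orthonormal
    columns and the (k+1) x k lower-bidiagonal matrix B such that
    A @ V = U @ B (in exact arithmetic).  Illustrative sketch only;
    no reorthogonalization is performed."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))

    # beta_1 * u_1 = b
    U[:, 0] = b / np.linalg.norm(b)
    v_prev = np.zeros(n)
    beta = 0.0
    for i in range(k):
        # alpha_{i+1} * v_{i+1} = A^T u_{i+1} - beta_{i+1} * v_i
        r = A.T @ U[:, i] - beta * v_prev
        alpha = np.linalg.norm(r)
        V[:, i] = r / alpha
        B[i, i] = alpha
        # beta_{i+2} * u_{i+2} = A v_{i+1} - alpha_{i+1} * u_{i+1}
        p = A @ V[:, i] - alpha * U[:, i]
        beta = np.linalg.norm(p)
        U[:, i + 1] = p / beta
        B[i + 1, i] = beta
        v_prev = V[:, i]
    return U, V, B
```

In the hybrid algorithms, the regularization term and the inner iterative solver then operate on the small projected problem defined by $B_k$ rather than on $A$ directly.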
