
Minimal discrepancy method



minimal residual method

An iteration method for the solution of an operator equation

$$ \tag{1 } A u = f $$

with a self-adjoint positive-definite bounded operator $ A $ acting in a Hilbert space $ H $, and with a given element $ f \in H $. The formula for the minimal discrepancy method takes the form

$$ \tag{2 } u ^ {k+1} = u ^ {k} - \alpha _ {k} ( A u ^ {k} - f ) ,\ \ k = 0 , 1 ,\dots $$

where the parameter

$$ \tag{3 } \alpha _ {k} = \ \frac{( A \xi ^ {k} , \xi ^ {k} ) }{( A \xi ^ {k} , A \xi ^ {k} ) } $$

is chosen at each step $ k \geq 0 $ so as to minimize the norm of the discrepancy (or residual vector) $ \xi ^ {k+1} = A u ^ {k+1} - f $; that is, it is required that

$$ \tag{4 } \| \xi ^ {k} - \alpha _ {k} A \xi ^ {k} \| = \ \inf _ \alpha \ \| \xi ^ {k} - \alpha A \xi ^ {k} \| . $$
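
As an illustration only (no code appears in the original article), the following Python sketch applies the iteration (2)–(3) to a finite-dimensional analogue of (1), with a small symmetric positive-definite matrix standing in for $ A $; the test matrix, right-hand side, starting guess and tolerance are assumptions chosen for the example.

```python
import numpy as np

def minimal_residual(A, f, u0, tol=1e-10, max_iter=1000):
    """Iteration (2)-(3): u^{k+1} = u^k - alpha_k (A u^k - f)."""
    u = np.array(u0, dtype=float)
    for _ in range(max_iter):
        xi = A @ u - f                    # discrepancy xi^k = A u^k - f
        if np.linalg.norm(xi) < tol:
            break
        Axi = A @ xi
        alpha = (Axi @ xi) / (Axi @ Axi)  # step (3): minimizes ||xi^k - alpha A xi^k||
        u -= alpha * xi                   # step (2)
    return u

# Illustrative symmetric positive-definite test problem (not from the article).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
f = np.array([1.0, 2.0])
u = minimal_residual(A, f, np.zeros(2))
print(u, np.linalg.norm(A @ u - f))       # approximate solution and final residual norm
```

Since $ \xi ^ {k+1} = \xi ^ {k} - \alpha _ {k} A \xi ^ {k} $ by (2), the discrepancy could also be updated directly, using one application of $ A $ per step; the sketch recomputes $ A u ^ {k} - f $ only for clarity.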

If the spectrum of $ A $ belongs to an interval $ [ m , M ] $ on the real line, where $ m \leq M $ are positive numbers, then the sequence of approximations $ \{ u ^ {k} \} $ produced by the method (2)–(3) converges to the solution $ u ^ {*} $ of (1) at the rate of a geometric progression with ratio $ q = ( M - m ) / ( M + m ) < 1 $.
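
For example (an illustrative computation, not part of the original article), if the spectrum of $ A $ lies in $ [ 1 , 9 ] $, then $ q = ( 9 - 1 ) / ( 9 + 1 ) = 0.8 $, so the norm of the discrepancy decreases by at least the factor $ 0.8 $ at every step.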

Different ways of defining the scalar product in $ H $ lead to different iteration methods. In particular, for special scalar products the formulas for the minimal discrepancy method coincide with the formulas for the method of steepest descent (cf. Steepest descent, method of) and the method of minimal errors (see [2]).
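
For instance (a standard computation, given here only for illustration), if the discrepancy is measured in the norm generated by the scalar product $ ( A ^ {-1} u , v ) $, the minimization in (4) yields $ \alpha _ {k} = ( \xi ^ {k} , \xi ^ {k} ) / ( A \xi ^ {k} , \xi ^ {k} ) $, which is the step length of the method of steepest descent.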

The condition for the convergence of the minimal discrepancy method may be weakened in comparison with that given above if the method is considered on certain subsets of $ H $. For example, if the minimal discrepancy method is considered only in real spaces, then the requirement that $ A $ be self-adjoint can be dropped (see [3]–[5]).

References

[1] M.A. Krasnosel'skii, S.G. Krein, "An iteration process with minimal residuals", Mat. Sb., 31 (1952) pp. 315–334 (In Russian)
[2] M.A. Krasnosel'skii, G.M. Vainikko, P.P. Zabreiko, et al., "Approximate solution of operator equations", Wolters-Noordhoff (1972) (Translated from Russian)
[3] A.A. Samarskii, "Theorie der Differenzverfahren", Akad. Verlagsgesell. Geest u. Portig K.-D. (1984) (Translated from Russian)
[4] G.I. Marchuk, Yu.A. Kuznetsov, "Iterative methods and quadratic functionals", Novosibirsk (1972) (In Russian)
[5] G.I. Marchuk, Yu.A. Kuznetsov, "Sur les méthodes numériques en sciences physique et economique", Dunod (1974) (Translated from Russian)
How to Cite This Entry:
Minimal discrepancy method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Minimal_discrepancy_method&oldid=51048
This article was adapted from an original article by Yu.A. Kuznetsov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.