Minimal discrepancy method

Revision as of 08:00, 6 June 2020


minimal residual method

An iteration method for the solution of an operator equation

$$ \tag{1} A u = f $$

with a self-adjoint positive-definite bounded operator $ A $ acting in a Hilbert space $ H $, and with a given element $ f \in H $. The formula for the minimal discrepancy method takes the form

$$ \tag{2} u^{k+1} = u^{k} - \alpha_{k} ( A u^{k} - f ), \quad k = 0, 1, \dots $$

where the parameter

$$ \tag{3} \alpha_{k} = \frac{( A \xi^{k}, \xi^{k} )}{( A \xi^{k}, A \xi^{k} )} $$

is chosen at each step $ k \geq 0 $ so as to minimize the norm of the discrepancy (or residual vector) $ \xi^{k+1} = A u^{k+1} - f $; that is, it is required that

$$ \tag{4} \| \xi^{k} - \alpha_{k} A \xi^{k} \| = \inf_{\alpha} \| \xi^{k} - \alpha A \xi^{k} \| . $$
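Formula (3) can be recovered from (4) by elementary one-variable minimization (a standard computation, not spelled out in the article): in a real space,

$$ \| \xi^{k} - \alpha A \xi^{k} \|^{2} = \| \xi^{k} \|^{2} - 2 \alpha ( A \xi^{k}, \xi^{k} ) + \alpha^{2} \| A \xi^{k} \|^{2}, $$

a quadratic in $ \alpha $ whose minimum is attained at $ \alpha = ( A \xi^{k}, \xi^{k} ) / ( A \xi^{k}, A \xi^{k} ) $, which is exactly (3).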

If the spectrum of $ A $ belongs to an interval $ [ m, M ] $ on the real line, where $ m \leq M $ are positive numbers, then the sequence of approximations $ \{ u^{k} \} $ in the method (2)–(3) converges to a solution $ u^{*} $ of (1) with the speed of a geometric progression with multiplier $ q = ( M - m ) / ( M + m ) < 1 $.
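As a concrete illustration, the iteration (2)–(3) can be sketched in a few lines of NumPy; the function name, stopping tolerance, and test matrix below are illustrative choices, not part of the article.

```python
import numpy as np

def minimal_discrepancy(A, f, u0, tol=1e-10, max_iter=1000):
    """Minimal discrepancy (minimal residual) iteration:
    u^{k+1} = u^k - alpha_k (A u^k - f), with alpha_k from formula (3),
    where xi^k = A u^k - f is the discrepancy (residual vector)."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        xi = A @ u - f                     # discrepancy xi^k
        if np.linalg.norm(xi) < tol:       # stop when the residual is small
            break
        Axi = A @ xi
        alpha = (Axi @ xi) / (Axi @ Axi)   # formula (3)
        u = u - alpha * xi                 # formula (2)
    return u

# Example: a small symmetric positive-definite matrix
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
f = np.array([1.0, 2.0])
u = minimal_discrepancy(A, f, np.zeros(2))
```

Since $ A $ here is symmetric positive definite, the convergence statement above applies and the iterates approach the exact solution geometrically.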

Different ways of defining the scalar product in $ H $ lead to different iteration methods. In particular, for special scalar products the formulas for the minimal discrepancy method coincide with the formulas for the method of steepest descent (cf. Steepest descent, method of) and the method of minimal errors (see [2]).

The condition for the convergence of the minimal discrepancy method may be weakened in comparison with that given above if it is considered on certain subsets of $ H $. For example, if the minimal discrepancy method is considered only in real spaces, then it is possible to drop the requirement that $ A $ be self-adjoint (see [3]–[5]).

References

[1] M.A. Krasnosel'skii, S.G. Krein, "An iteration process with minimal residuals" Mat. Sb. , 31 (1952) pp. 315–334 (In Russian)
[2] M.A. Krasnosel'skii, G.M. Vainikko, P.P. Zabreiko, et al., "Approximate solution of operator equations" , Wolters-Noordhoff (1972) (Translated from Russian)
[3] A.A. Samarskii, "Theorie der Differenzverfahren" , Akad. Verlagsgesell. Geest u. Portig K.-D. (1984) (Translated from Russian)
[4] G.I. Marchuk, Yu.A. Kuznetsov, "Iterative methods and quadratic functionals" , Novosibirsk (1972) (In Russian)
[5] G.I. Marchuk, Yu.A. Kuznetsov, "Sur les méthodes numériques en sciences physique et economique" , Dunod (1974) (Translated from Russian)
How to Cite This Entry:
Minimal discrepancy method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Minimal_discrepancy_method&oldid=19264
This article was adapted from an original article by Yu.A. Kuznetsov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article