Iteration algorithm
A recursive algorithm realizing a sequence of point-to-set mappings $ A_k : V \rightarrow V $, where $ V $ is a topological space, that can be used to compute, for an initial point $ u^0 \in V $, a sequence of points $ u^k \in V $ by the formula

$$ \tag{1} u^{k+1} = A_k u^k , \quad k = 0, 1, \dots . $$

The operation (1) is called an iteration, while the sequence $ \{ u^k \} $ is called an iterative sequence.
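As an illustration (an addition, not part of the original article), the scheme (1) is just the following loop; the callable `A`, the metric `distance`, the iteration cap and the tolerance-based stopping test are hypothetical choices:

```python
import math

def iterate(u0, A, distance, max_iter=100, tol=1e-12):
    """Compute the iterative sequence (1): u^{k+1} = A(k, u^k)."""
    u = u0
    for k in range(max_iter):
        u_next = A(k, u)
        if distance(u_next, u) < tol:  # illustrative convergence test
            return u_next
        u = u_next
    return u

# Example: the contracting mapping A_k u = cos u on R has fixed point u ~ 0.739085.
print(iterate(1.0, lambda k, u: math.cos(u), lambda a, b: abs(a - b)))
```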

Iteration methods (or methods of iterative approximation) are used both for finding a solution to an operator equation

$$ \tag{2} A u = f , $$

a minimum of a functional, eigenvalues and eigenvectors of an equation $ A u = \lambda u $, etc., and for proving the existence of solutions to these problems. An iteration method (1) is called convergent for the initial approximation $ u^0 $ to a solution $ u $ of the problem considered if $ u^k \rightarrow u $ as $ k \rightarrow \infty $.

Operators $ A_k $ for solving (2), given on a metric linear space $ V $, are usually constructed by formulas of the form

$$ \tag{3} A_k u^k = u^k - H_k ( A u^k - f ) , $$

where $ \{ H_k : V \rightarrow V \} $ is some sequence of operators determined by the type of the iteration method.
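For example (a sketch added here, not in the original), the simplest choice $ H_k = \tau I $ with a fixed parameter $ \tau > 0 $ turns (1), (3) into the stationary Richardson method $ u^{k+1} = u^k - \tau ( A u^k - f ) $, which for a symmetric positive-definite matrix $ A $ with spectrum in $ [ m , M ] $ converges whenever $ 0 < \tau < 2 / M $; the $ 2 \times 2 $ system below is an illustrative assumption:

```python
import numpy as np

def richardson(A, f, u0, tau, n_iter=500):
    """Method (1), (3) with H_k = tau * I: u^{k+1} = u^k - tau * (A u^k - f)."""
    u = u0.copy()
    for _ in range(n_iter):
        u = u - tau * (A @ u - f)
    return u

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # SPD, spectrum contained in [2, 5]
f = np.array([1.0, 2.0])
print(richardson(A, f, np.zeros(2), tau=2.0 / 5.0))
print(np.linalg.solve(A, f))            # reference solution for comparison
```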

The contracting-mapping principle and its generalizations, or variational minimization methods for a functional related to the problem, lie at the basis of constructing iteration methods of type (1), (3). Various methods for constructing $ A_k $ are used, e.g. variants of the Newton method or the method of descent (cf. Descent, method of). One tries to choose the $ H_k $ so that fast convergence $ u^k \rightarrow u $ is ensured, under the condition that the numerical realization of the operations $ A_k u^k $, for a given amount of computer memory, is sufficiently simple, has as low a complexity as possible, and is numerically stable.

Iteration methods for solving linear problems are well developed and well studied. Here the iteration methods are divided into linear and non-linear ones. The Gauss method, the Seidel method, the successive overrelaxation method (cf. Relaxation method), and iteration methods with Chebyshev parameters belong to the linear methods; variational methods belong to the non-linear methods (e.g. the methods of steepest descent and conjugate gradients, the minimal discrepancy method, etc., cf. Steepest descent, method of; Conjugate gradients, method of). One efficient iteration method is the method using Chebyshev parameters, where $ A $ is a self-adjoint operator with spectrum on $ [ m , M ] $, $ M > m > 0 $. This method provides an optimal (for the given information on the boundaries of the spectrum) estimate of the convergence at a pre-assigned $ N $-th step. The method can be written in the form

$$ u^{k+1} = u^k - \alpha_{k+1} ( A u^k - f ) , \quad k = 0, \dots, N - 1 , $$

where

$$ \alpha_{k+1} = 2 \left( M + m - ( M - m ) \cos \frac{( 2 j_k - 1 ) \pi}{2 N} \right)^{-1} $$

and $ N $ is the expected number of iterations; one uses in it a special permutation $ \kappa_N = ( j_1, \dots, j_N ) $ of order $ N $ that mixes the roots of the Chebyshev polynomials well, for numerical stability. For $ N = 2^n $ the permutation $ \kappa_N $ can, e.g., be constructed as follows: $ \kappa_2 = ( 1 , 2 ) $, and if $ \kappa_{2^{i-1}} = ( j_1, \dots, j_{2^{i-1}} ) $ has already been constructed, then $ \kappa_{2^i} = ( j_1 , 2^i + 1 - j_1 , j_2 , 2^i + 1 - j_2 , \dots ) $. For $ N = 16 $ it has the form $ ( 1, 16, 8, 9, 4, 13, 5, 12, 2, 15, 7, 10, 3, 14, 6, 11 ) $.
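A hedged sketch of the whole scheme in Python (an addition, assuming NumPy; the $ 2 \times 2 $ test matrix and the spectral bounds $ m = 2 $, $ M = 5 $ are illustrative): `chebyshev_order` implements the doubling construction of $ \kappa_{2^n} $ and reproduces the $ N = 16 $ ordering above, and `chebyshev` performs the $ N $ iterations with the parameters $ \alpha_{k+1} $.

```python
import numpy as np

def chebyshev_order(n):
    """Build kappa_{2^n}: kappa_2 = (1, 2); each j of kappa_{2^{i-1}}
    is followed by its mirror 2^i + 1 - j in kappa_{2^i}."""
    kappa = [1, 2]
    for i in range(2, n + 1):
        nxt = []
        for j in kappa:
            nxt += [j, 2**i + 1 - j]
        kappa = nxt
    return kappa

def chebyshev(A, f, u0, m, M, n):
    """N = 2^n steps of u^{k+1} = u^k - alpha_{k+1} (A u^k - f), with
    alpha_{k+1} = 2 / (M + m - (M - m) cos((2 j_k - 1) pi / (2 N)))."""
    N, u = 2**n, u0.copy()
    for j in chebyshev_order(n):
        alpha = 2.0 / (M + m - (M - m) * np.cos((2 * j - 1) * np.pi / (2 * N)))
        u = u - alpha * (A @ u - f)
    return u

print(chebyshev_order(4))  # [1, 16, 8, 9, 4, 13, 5, 12, 2, 15, 7, 10, 3, 14, 6, 11]

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # SPD, spectrum contained in [2, 5]
f = np.array([1.0, 2.0])
print(chebyshev(A, f, np.zeros(2), m=2.0, M=5.0, n=4))
print(np.linalg.solve(A, f))            # reference solution
```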

There exist iteration methods using $ r $ previous approximations $ u^k, \dots, u^{k-r+1} $. They are called $ r $-step methods and have an increased rate of convergence; a sketch of one such method follows.
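As a hedged illustration of a $ 2 $-step method (an addition; the scheme is the classical second-order Richardson, or "heavy-ball", iteration, not something specified in the article), the residual correction is combined with a momentum term $ \beta ( u^k - u^{k-1} ) $; the parameter choices below are the standard optimal ones for a symmetric positive-definite $ A $ with spectrum in $ [ m , M ] $:

```python
import numpy as np

def two_step(A, f, u0, m, M, n_iter=50):
    """2-step method: u^{k+1} = u^k - alpha (A u^k - f) + beta (u^k - u^{k-1})."""
    alpha = 4.0 / (np.sqrt(M) + np.sqrt(m)) ** 2
    beta = ((np.sqrt(M) - np.sqrt(m)) / (np.sqrt(M) + np.sqrt(m))) ** 2
    u_prev, u = u0.copy(), u0 - alpha * (A @ u0 - f)  # ordinary first step
    for _ in range(n_iter - 1):
        u, u_prev = u - alpha * (A @ u - f) + beta * (u - u_prev), u
    return u

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # SPD, spectrum contained in [2, 5]
f = np.array([1.0, 2.0])
print(two_step(A, f, np.zeros(2), m=2.0, M=5.0))
print(np.linalg.solve(A, f))            # reference solution
```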

Iteration methods are extensively used in solving multi-dimensional problems in mathematical physics, and for some classes of problems there exist special fast-converging iteration methods. Examples are: the method of variable directions, the methods explained in [7] for elliptic boundary-initial value problems, and some methods for the problem of particle transfer or radiation (cf. [1]–[7]; Variable-directions method).

References

[1] L.V. Kantorovich, G.P. Akilov, "Funktionalanalysis in normierten Räumen", Akademie Verlag (1964) (Translated from Russian)
[2] L. Collatz, "Funktionalanalysis und numerische Mathematik", Springer (1964)
[3] G.I. Marchuk, V.I. Lebedev, "Numerical methods in the theory of neutron transport", Harwood (1986) (Translated from Russian)
[4] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations", MIR (1977) (Translated from Russian)
[5] M.A. Krasnosel'skii, G.M. Vainikko, P.P. Zabreiko, et al., "Approximate solution of operator equations", Wolters-Noordhoff (1972) (Translated from Russian)
[6] V.I. Lebedev, "Optimization in iteration methods", Proc. 2nd Math. School Optim. Organ. Comp. 1971, Inst. Kibernet. Akad. Nauk SSSR (1972) pp. 109–135 (In Russian)
[7] R.P. Fedorenko, "Iterative methods for elliptic difference equations", Russian Math. Surveys, 28 : 2 (1973) pp. 129–195; Uspekhi Mat. Nauk, 28 : 2 (1973) pp. 121–182

Comments

See also the editorial comments to Chebyshev iteration method.

This article was adapted from an original article by V.I. Lebedev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.