Störmer method

A finite-difference method for finding a solution to the Cauchy problem for a system of second-order ordinary differential equations not containing the first derivative of the unknown function:

$$ y ^ {\prime\prime} = f( x, y),\ \ y( x _ {0} ) = y _ {0} ,\ \ y ^ \prime ( x _ {0} ) = y _ {0} ^ \prime . $$

Integrating over a grid with constant step $ h $, $ x _ {n} = x _ {0} + nh $, $ n = 1, 2 \dots $, gives the following computational formulas:

a) extrapolation:

$$ y _ {n+1} - 2 {y _ {n} } + y _ {n-1} = h ^ {2} \sum _ {\lambda = 0 } ^ { k } u _ {- \lambda } f _ {n- \lambda } ,\ \ f _ {n} = f( x _ {n} , y _ {n} ), $$

$$ n = 0, 1 \dots $$

or (in difference form)

$$ y _ {n+1} - 2 {y _ {n} } + y _ {n-1} = h ^ {2} \sum_{p=0}^ { k } \beta _ {p} \nabla ^ {p} f _ {n} , $$

where

$$ \nabla ^ {p} f _ {n} = \nabla ( \nabla ^ {p-1} f _ {n} ) = \nabla ^ {p-1} f _ {n} - \nabla ^ {p-1} f _ {n-1} , $$

$$ \beta _ {p} = \frac{1}{p!} \left ( \int\limits _ { 0 } ^ { 1 } ( 1- t) t( t+ 1) \dots ( t+( p- 1)) dt + \int\limits _ { 0 } ^ { - 1} (- 1- t) t \dots ( t+( p- 1)) dt \right ) ,\ p = 0 \dots k, $$

$$ u _ {- \lambda } = \sum _ {p= \lambda } ^ { k } \binom{p}{\lambda} \beta _ {p} ,\ \lambda = 0 \dots k; $$
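
As an illustration, the extrapolation formula in its simplest case $ k = 0 $ reduces to $ y _ {n+1} = 2y _ {n} - y _ {n-1} + h ^ {2} f( x _ {n} , y _ {n} ) $, since only $ u _ {0} = \beta _ {0} = 1 $ remains. The following minimal Python sketch applies this rule to the test problem $ y ^ {\prime\prime} = - y $; the test problem, step size and function names are illustrative assumptions, not part of the article.

import math

def f(x, y):
    # test problem y'' = -y, exact solution cos(x) for y(0) = 1, y'(0) = 0
    return -y

def stoermer_k0(f, x0, y0, y1, h, n_steps):
    """Extrapolation formula a) with k = 0:
    y_{n+1} = 2*y_n - y_{n-1} + h^2 * f(x_n, y_n)."""
    ys = [y0, y1]
    for n in range(1, n_steps):
        x_n = x0 + n * h
        ys.append(2 * ys[n] - ys[n - 1] + h * h * f(x_n, ys[n]))
    return ys

h = 0.01
# the two supporting values are taken from the exact solution here
ys = stoermer_k0(f, 0.0, math.cos(0.0), math.cos(h), h, 1000)
print(ys[-1], math.cos(10.0))   # computed vs exact value at x = 10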

b) interpolation:

$$ y _ {n+1} - 2y _ {n} + y _ {n-1} = h ^ {2} \sum _ {\lambda = - 1 } ^ { k } v _ {- \lambda } f _ {n- \lambda } $$

or (in difference form)

$$ y _ {n+1} - 2y _ {n} + y _ {n-1} = h ^ {2} \sum_{p=0}^ { k } \gamma _ {p} \nabla ^ {p} f _ {n+1} , $$

where

$$ \gamma _ {p} = \frac{1}{p!} \left ( \int\limits _ { 0 } ^ { 1 } ( 1- t)( t- 1) t \dots ( t+( p- 2)) dt + \int\limits _ { 0 } ^ { -1} (- 1- t)( t- 1) t \dots ( t+( p- 2)) dt \right ) , $$

$$ v _ {- \lambda } = \sum _ {p= \lambda } ^ {k+1} \binom{p}{\lambda} \gamma _ {p} ,\ \lambda = - 1 , 0 \dots k. $$

The first values of the coefficients $ \beta _ {p} $ and $ \gamma _ {p} $ are:

$$ \beta _ {0} = 1 ,\ \beta _ {1} = 0,\ \beta _ {2} = \beta _ {3} = \frac{1}{12} ,\ \beta _ {4} = \frac{19}{240} , $$

$$ \beta _ {5} = \frac{3}{40} , \beta _ {6} = \frac{863}{12096} ; $$

$$ \gamma _ {0} = - \gamma _ {1} = 1, \gamma _ {2} = \frac{1}{12} , \gamma _ {3} = 0, $$

$$ \gamma _ {4} = \gamma _ {5} = - \frac{1}{240} , \gamma _ {6} = - \frac{221}{60480} . $$
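
These values can be checked directly from the integral formulas above. The following Python/SymPy sketch is an illustrative verification (not part of the original article); it evaluates the integrals symbolically, with the empty product for $ p = 0 $ taken equal to $ 1 $, and reproduces the tabulated $ \beta _ {p} $ and $ \gamma _ {p} $.

from sympy import symbols, integrate, factorial

t = symbols('t')

def rising(start, count):
    # product start*(start+1)*...*(start+count-1); equal to 1 when count = 0
    out = 1
    for j in range(count):
        out *= start + j
    return out

def beta(p):
    # beta_p = (1/p!) * ( int_0^1 (1-t) t(t+1)...(t+p-1) dt
    #                   + int_0^{-1} (-1-t) t(t+1)...(t+p-1) dt )
    poly = rising(t, p)
    return (integrate((1 - t) * poly, (t, 0, 1))
            + integrate((-1 - t) * poly, (t, 0, -1))) / factorial(p)

def gamma(p):
    # gamma_p uses the product (t-1) t (t+1) ... (t+p-2) instead
    poly = rising(t - 1, p)
    return (integrate((1 - t) * poly, (t, 0, 1))
            + integrate((-1 - t) * poly, (t, 0, -1))) / factorial(p)

print([beta(p) for p in range(7)])   # 1, 0, 1/12, 1/12, 19/240, 3/40, 863/12096
print([gamma(p) for p in range(7)])  # 1, -1, 1/12, 0, -1/240, -1/240, -221/60480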

For the same $ k $, formula b) is more accurate, but it requires solving a non-linear system of equations to obtain the value $ y _ {n+1} $. In practice, one first obtains an approximation to the solution $ y _ {n+1} $ by formula a), and then refines it by applying the formula

$$ y _ {n+1} ^ {(i+1)} - 2y _ {n} + y _ {n-1} = h ^ {2} \left ( v _ {1} f _ {n+1} ^ {(i)} + \sum _ {\lambda = 0 } ^ { k } v _ {- \lambda } f _ {n - \lambda } \right ) ,\ i = 0, 1, 2, $$

$$ f _ {n+1} ^ {(i)} = f \left ( x _ {n+1} , y _ {n+1} ^ {(i)} \right ) ,\ y _ {n+1} = y _ {n+1} ^ {(3)} . $$
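
A minimal Python sketch of this predictor–corrector procedure is given below. It uses the extrapolation formula with $ k = 2 $ as predictor (coefficients $ 13/12 $, $ -2/12 $, $ 1/12 $, obtained by expanding the backward differences with $ \beta _ {0} = 1 $, $ \beta _ {1} = 0 $, $ \beta _ {2} = 1/12 $) and the Numerov formula quoted further on as corrector, with three correction sweeps as in the formula above. The test problem, step size and names are illustrative assumptions.

import math

def f(x, y):
    # test problem y'' = -y, exact solution cos(x)
    return -y

def stoermer_predictor_corrector(f, x0, y_start, h, n_steps):
    """Predict y_{n+1} by the explicit k = 2 formula, then apply the implicit
    (Numerov) formula three times as a fixed-point correction."""
    ys = list(y_start)                       # supporting values y_0, y_1, y_2
    fs = [f(x0 + i * h, ys[i]) for i in range(3)]
    for n in range(2, n_steps):
        x_next = x0 + (n + 1) * h
        # predictor: y_{n+1} - 2 y_n + y_{n-1} = h^2/12 (13 f_n - 2 f_{n-1} + f_{n-2})
        y_new = (2 * ys[n] - ys[n - 1]
                 + h * h / 12.0 * (13 * fs[n] - 2 * fs[n - 1] + fs[n - 2]))
        # corrector: y_{n+1} - 2 y_n + y_{n-1} = h^2/12 (f_{n+1} + 10 f_n + f_{n-1})
        for _ in range(3):                   # i = 0, 1, 2
            y_new = (2 * ys[n] - ys[n - 1]
                     + h * h / 12.0 * (f(x_next, y_new) + 10 * fs[n] + fs[n - 1]))
        ys.append(y_new)
        fs.append(f(x_next, y_new))
    return ys

h = 0.1
start = [math.cos(i * h) for i in range(3)]  # exact supporting values (see next paragraph)
ys = stoermer_predictor_corrector(f, 0.0, start, h, 100)
print(ys[-1] - math.cos(10.0))               # error at x = 10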

Application of Störmer's method is based on the assumption that the approximate values of the solution at the first $ k $ points of the grid, $ y _ {0} \dots y _ {k} $ (the supporting values), are already known. These values are computed either by the Runge–Kutta method or from the Taylor expansion of the solution. The need for special formulas to compute the values at the beginning of the process, and again whenever the step of the integration grid is changed, makes the computer programs considerably more complicated.
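
For instance, the following Python sketch (an illustrative assumption about the starting procedure, not text from the original article) applies the classical fourth-order Runge–Kutta method to the equivalent first-order system $ ( y, y ^ \prime ) ^ \prime = ( y ^ \prime , f( x, y)) $ to produce the supporting values.

def rk4_supporting_values(f, x0, y0, dy0, h, k):
    """Return y_0, ..., y_k by integrating the first-order system
    (y, y')' = (y', f(x, y)) with the classical Runge-Kutta method."""
    ys = [y0]
    y, dy = y0, dy0
    for i in range(k):
        x = x0 + i * h
        k1y, k1d = dy,             f(x, y)
        k2y, k2d = dy + h/2 * k1d, f(x + h/2, y + h/2 * k1y)
        k3y, k3d = dy + h/2 * k2d, f(x + h/2, y + h/2 * k2y)
        k4y, k4d = dy + h * k3d,   f(x + h, y + h * k3y)
        y  += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        dy += h / 6 * (k1d + 2 * k2d + 2 * k3d + k4d)
        ys.append(y)
    return ys

# e.g. supporting values for y'' = -y, y(0) = 1, y'(0) = 0, usable in the sketches above
start = rk4_supporting_values(lambda x, y: -y, 0.0, 1.0, 0.0, 0.1, 2)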

Störmer's formulas with $ k $ terms on the right-hand side have an error of order $ O( h ^ {k+1} ) $. The estimate of the error is similar to the corresponding estimate for the Adams method. One can show that for any $ k $ there are stable formulas with an error of order $ O( h ^ {k+1} ) $.

In practice, one usually uses formulas with $ k = 4, 5, 6 $. One of Störmer's interpolation formulas, known as the Numerov method, is widely used:

$$ y _ {n+2} - 2 y _ {n+1} + y _ {n} = \frac{h ^ {2} }{12} ( f _ {n+2} + 10 f _ {n+1} + f _ {n} ). $$
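
Since the formula is implicit in $ y _ {n+2} $, in the common linear case $ f( x, y) = w( x) y $ it is usually rearranged and solved for $ y _ {n+2} $ in closed form. The following Python sketch does this for the test problem $ y ^ {\prime\prime} = - y $ (i.e. $ w( x) \equiv - 1 $); the problem and names are illustrative assumptions.

import math

def numerov_linear(w, x0, y0, y1, h, n_steps):
    """Numerov's formula for the linear equation y'' = w(x) y, solved for the
    new value:  y_{n+2} (1 - c w_{n+2}) = y_{n+1} (2 + 10 c w_{n+1})
                                          - y_n (1 - c w_n),   c = h^2/12."""
    c = h * h / 12.0
    ys = [y0, y1]
    for n in range(n_steps - 1):
        x = x0 + n * h
        y_next = (ys[n + 1] * (2 + 10 * c * w(x + h))
                  - ys[n] * (1 - c * w(x))) / (1 - c * w(x + 2 * h))
        ys.append(y_next)
    return ys

h = 0.1
ys = numerov_linear(lambda x: -1.0, 0.0, 1.0, math.cos(h), h, 100)
print(ys[-1] - math.cos(10.0))   # error at x = 10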

The method was introduced by C. Störmer in 1920.

References

[1] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations", MIR (1977) (Translated from Russian)
[2] J.D. Lambert, "Computational methods in ordinary differential equations", Wiley (1973)
[3] S.G. Mikhlin, Kh.L. Smolitskii, "Approximate methods for solution of differential and integral equations", American Elsevier (1967) (Translated from Russian)

Comments

References

[a1] F.B. Hildebrand, "Introduction to numerical analysis", Dover, reprint (1987) pp. 275ff
[a2] C. Störmer, "Méthode d'intégration numérique des équations différentielles ordinaires", C.R. Congress Internat. Strassbourg 1920 (1921) pp. 243–257
How to Cite This Entry:
Störmer method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=St%C3%B6rmer_method&oldid=17127
This article was adapted from an original article by S.S. Gaisaryan (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article