Runge-Kutta method



A one-step method for numerically solving the Cauchy problem for a system of ordinary differential equations of the form

$$ \tag{1} u' = f(t, u). $$

The principal idea of the Runge–Kutta method was proposed by C. Runge [1] and developed later by W. Kutta [2] and others. Originally, this idea was used only for constructing explicit schemes of the method, which were sought in the form

$$ \tag{2} y_{j+1} = y_j + \theta \sum_{i=1}^{q} A_i k_i , $$

where

$$ y_{j+\alpha} \cong u(t_j + \alpha \theta), \qquad k_1 = f(t_j, y_j), $$

$$ k_n = f\left( t_j + \alpha_n \theta ,\; y_j + \theta \sum_{m=1}^{n-1} \beta_{nm} k_m \right), \qquad n = 2, \dots, q, $$

and, moreover, the values of the constants $A_i$, $\alpha_n$, $\beta_{nm}$, $i = 1, \dots, q$; $n = 2, \dots, q$; $m = 1, \dots, n-1$, were chosen so that the error of equation (2) on the exact solution of (1) has the highest possible order (i.e. the power of $\theta$, the stepsize, in the leading term of the expansion).
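
In code, scheme (2) amounts to computing the stages $k_1, \dots, k_q$ one after another and forming their weighted sum. The following Python sketch (function and argument names are illustrative, and the stepsize $\theta$ is written as h) implements a generic explicit scheme of the form (2) for a given choice of the constants:

```python
def explicit_rk_step(f, t, y, h, A, alpha, beta):
    """One step of the explicit scheme (2); h plays the role of theta.

    A     -- weights A_1, ..., A_q
    alpha -- nodes alpha_1, ..., alpha_q (with alpha_1 = 0)
    beta  -- strictly lower-triangular q x q table of the beta_{nm}
    """
    q = len(A)
    k = [f(t, y)]                                     # k_1
    for n in range(1, q):                             # k_2, ..., k_q
        y_stage = y + h * sum(beta[n][m] * k[m] for m in range(n))
        k.append(f(t + alpha[n] * h, y_stage))
    return y + h * sum(A[i] * k[i] for i in range(q))
```

With $q = 1$ and $A_1 = 1$ this reduces to Euler's method mentioned below.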

In contrast to multi-step methods, the Runge–Kutta method, like other one-step methods, requires only the value of the approximate solution at the last time point and allows calculations to be carried out under the initial conditions natural for equation (1). This also allows it to be used directly on grids that are not equally spaced. However, since the method does not use information about the solution at previous grid nodes, it is, in general, less economical than, e.g., the Adams method.

The classical Runge–Kutta method (see, e.g., [3]) is the method

$$ y_{j+1} = y_j + \frac{1}{6} \theta ( k_1 + 2 k_2 + 2 k_3 + k_4 ), $$

$$ k_1 = f(t_j, y_j), \qquad k_2 = f\left( t_j + \frac{1}{2} \theta ,\; y_j + \frac{1}{2} \theta k_1 \right), $$

$$ k_3 = f\left( t_j + \frac{1}{2} \theta ,\; y_j + \frac{1}{2} \theta k_2 \right), \qquad k_4 = f( t_j + \theta ,\; y_j + \theta k_3 ). $$
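
In Python the classical method reads as follows (a minimal sketch; h again denotes the stepsize $\theta$, and the test problem $u' = -u$ is only an illustration):

```python
import math

def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate u' = -u, u(0) = 1 up to t = 1 with stepsize 0.1.
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(lambda s, u: -u, t, y, h)
    t += h
print(abs(y - math.exp(-1.0)))   # global error is O(h^4), roughly 3e-7 here
```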

This method belongs to the family of methods of the form (2) with $q = 4$ and fourth order of accuracy, which depends on two free parameters. The simplest explicit Runge–Kutta method, of first order of accuracy, is obtained from (2) with $q = 1$; it is also the most widely used and is known as Euler's method. From (2) with $q = 2, 3$ one obtains families of Runge–Kutta methods of second and third order of accuracy, depending on one and two free parameters, respectively. For $q > 4$ the above correspondence between the value of $q$ and the optimal order of accuracy is no longer valid: Runge–Kutta methods of the form (2) with fifth order of accuracy can be realized only for $q = 6$, with sixth order only for $q = 7$, with seventh order only for $q = 9$, etc. In this situation, when $q$ is increased by one, the enlargement of the set of constants $A_i$, $\alpha_n$, $\beta_{nm}$ to be chosen in (2) is often insufficient to satisfy the conditions needed to raise the order of accuracy of an explicit Runge–Kutta method by one. To increase the number of parameters to be chosen in (2) one can consider, for example, the following extension of Runge's construction of one-step methods:

$$ \tag{3} k_n = f\left( t_j + \alpha_n \theta ,\; y_j + \theta \sum_{m=1}^{q} \beta_{nm} k_m \right), \qquad n = 1, \dots, q . $$

In general, methods of the form (2), (3) are implicit. This significantly complicates their numerical implementation: at each step the values $k_n$, $n = 1, \dots, q$, have to be found from the, generally speaking, non-linear system of equations (3). However, owing to the considerable increase in the number of constants to be chosen, such methods acquire the following property (see [4]): for any value of $q$ there exists an implicit Runge–Kutta method of order of accuracy $2q$. Moreover, this extension of the class of Runge–Kutta methods yields methods well adapted to stiff differential systems (cf. Stiff differential system).
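
For the implicit relations (3) the stages can no longer be computed one after another. The simplest way to solve the non-linear system is fixed-point (functional) iteration, which converges for sufficiently small $\theta$ if $f$ satisfies a Lipschitz condition; the sketch below (illustrative names, scalar equation, stepsize written h) only shows the structure, and for stiff problems a Newton-type iteration is normally used instead.

```python
def implicit_rk_stages(f, t, y, h, alpha, beta, tol=1e-12, max_iter=200):
    """Solve the non-linear system (3) for k_1, ..., k_q by fixed-point iteration."""
    q = len(alpha)
    k = [f(t, y)] * q                                 # initial guess for all stages
    for _ in range(max_iter):
        k_new = [f(t + alpha[n] * h,
                   y + h * sum(beta[n][m] * k[m] for m in range(q)))
                 for n in range(q)]
        if max(abs(k_new[n] - k[n]) for n in range(q)) < tol:
            return k_new
        k = k_new
    return k
```

For instance, the one-stage choice $\alpha_1 = 1/2$, $\beta_{11} = 1/2$, $A_1 = 1$ (the implicit midpoint rule) has order of accuracy $2 = 2q$, in agreement with the property from [4] cited above.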

There exists one more modification (see, e.g., [5]) of Runge's concept of constructing one-step methods for numerically solving equations of the form (1). Namely, proceeding from (1) one writes

$$ u(t_j + \theta) - u(t_j) = \theta \int_{0}^{1} f( t_j + \alpha \theta ,\; u(t_j + \alpha \theta) ) \, d\alpha . $$

An approximation of the latter integral by a quadrature formula with $ q $ nodes gives

$$ \tag{4} u(t_j + \theta) \cong u(t_j) + \theta \sum_{i=1}^{q} A_i f( t_j + \alpha_i \theta ,\; u(t_j + \alpha_i \theta) ) . $$

If the choice of the nodes $\alpha_i$ and the coefficients $A_i$, $i = 1, \dots, q$, of this quadrature formula is subject to the conditions

$$ \tag{5} \sum_{i=1}^{q} A_i = 1, \qquad \sum_{i=1}^{q} A_i \alpha_i^{n} = \frac{1}{n+1} , \qquad n = 1, \dots, p-1, $$

then the error of the approximate equality (4) will be of order $\theta^{p+1}$. For $p \leq 2q$ the system of equations (5) is solvable and the approximate equality (4) can be constructed. Similarly, one can write approximate equalities for the unknown values $u(t_j + \alpha_i \theta)$ on the right-hand side of (4); moreover, the accuracy required of these values may be relaxed by lowering the order, etc.
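
As an illustration of (5) (an example added here, not taken from the original): the Simpson-type nodes $\alpha = (0, 1/2, 1)$ with weights $A = (1/6, 2/3, 1/6)$ satisfy the conditions for $n = 1, 2, 3$, so $q = 3$ nodes already give $p = 4$. A quick check in exact arithmetic:

```python
from fractions import Fraction as F

alpha = [F(0), F(1, 2), F(1)]          # quadrature nodes alpha_i
A     = [F(1, 6), F(2, 3), F(1, 6)]    # weights A_i (Simpson's rule)

assert sum(A) == 1
for n in range(1, 4):                  # n = 1, ..., p - 1 with p = 4
    assert sum(Ai * ai ** n for Ai, ai in zip(A, alpha)) == F(1, n + 1)
print("conditions (5) hold with p = 4")
```

These are, in fact, the weights appearing in the corrector of the predictor-corrector example below.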

An example of a one-step method constructed in the way mentioned above is the following method with third order of accuracy, of predictor-corrector type (see [6]):

$$ y_{j+1/4} = y_j + \frac{1}{4} \theta f(t_j, y_j), $$

$$ y_{j+1/2} = y_j + \frac{1}{2} \theta f\left( t_j + \frac{\theta}{4} ,\; y_{j+1/4} \right), $$

$$ y_{j+1}^{*} = y_j + \theta f\left( t_j + \frac{1}{2} \theta ,\; y_{j+1/2} \right), $$

$$ y_{j+1} = y_j + \frac{1}{6} \theta \left( f(t_j, y_j) + 4 f\left( t_j + \frac{1}{2} \theta ,\; y_{j+1/2} \right) + f( t_j + \theta ,\; y_{j+1}^{*} ) \right). $$
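
A direct transcription of this predictor-corrector method into Python (a sketch; the names are illustrative and h stands for $\theta$):

```python
def pc3_step(f, t, y, h):
    """One step of the third-order predictor-corrector method above."""
    y_quarter = y + h / 4 * f(t, y)                    # y_{j+1/4}
    y_half    = y + h / 2 * f(t + h / 4, y_quarter)    # y_{j+1/2}
    y_pred    = y + h * f(t + h / 2, y_half)           # predictor y*_{j+1}
    return y + h / 6 * (f(t, y)                        # corrector y_{j+1}
                        + 4 * f(t + h / 2, y_half)
                        + f(t + h, y_pred))
```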

If one assumes that in (4) one of the $\alpha_i$ equals $1$, then in the same way one can also construct implicit methods, e.g., the method

$$ y_{j+1} = y_j + \frac{1}{2} \theta \left( f\bigl( t_j ,\; y_{j+1} - \theta f( t_j + \theta , y_{j+1} ) \bigr) + f( t_j + \theta , y_{j+1} ) \right) $$

with second order of accuracy.
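
A minimal sketch of this implicit method for a scalar equation, with the unknown $y_{j+1}$ found by fixed-point iteration started from the explicit Euler value (again illustrative names; a Newton iteration would be preferable for stiff problems):

```python
def implicit2_step(f, t, y, h, tol=1e-12, max_iter=200):
    """One step of the second-order implicit method above (scalar case)."""
    y_next = y + h * f(t, y)                 # explicit Euler starting guess
    for _ in range(max_iter):
        y_new = y + h / 2 * (f(t, y_next - h * f(t + h, y_next))
                             + f(t + h, y_next))
        if abs(y_new - y_next) < tol:
            break
        y_next = y_new
    return y_new
```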

The approaches to the construction of numerical methods considered above for equations of type (1) can be extended to ordinary differential equations of higher orders (see [6], [7]) and are also used when constructing difference schemes in the case of partial differential equations.
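
For example, a scalar second-order equation $y'' = g(t, y, y')$ is brought to the form (1) by setting $u_1 = y$, $u_2 = y'$, which gives the system

$$ u_1' = u_2, \qquad u_2' = g(t, u_1, u_2), $$

to which any of the Runge–Kutta schemes above can be applied componentwise.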

References

[1] C. Runge, "Ueber die numerische Auflösung von Differentialgleichungen" Math. Ann. , 46 (1895) pp. 167–178
[2] W. Kutta, "Beitrag zur näherungsweisen Integration von Differentialgleichungen" Z. Math. und Phys. , 46 (1901) pp. 435–453
[3] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian)
[4] J.C. Butcher, "Implicit Runge–Kutta processes" Math. Comp. , 18 (1964) pp. 50–64
[5] V.V. Bobkov, "A method for constructing one-step rules for the approximate solution of differential equations" Vestsi Akad. Navuk BSSR. Ser. Fiz. Mat. Navuk : 4 (1967) pp. 27–35 (In Russian)
[6] V.I. Krylov, V.V. Bobkov, P.I. Monastyrnyi, "Numerical methods" , 2 , Moscow (1977) (In Russian)
[7] L. Collatz, "Numerical treatment of differential equations" , Springer (1966) (Translated from German)
