Adaptive Runge–Kutta method

ARK method

An $ s $-stage adaptive Runge–Kutta method for the computation of approximations $ u _ {m} $ for the solution $ y ( t _ {m} ) $ of an initial-value problem

$$ y ^ \prime = f ( t,y ) , y ( t _ {0} ) = y _ {0} , t \in [ t _ {0} ,t _ {e} ] , $$

is given by

$$ u _ {m + 1 } ^ {( 1 ) } = u _ {m} , $$

$$ u _ {m + 1 } ^ {( i ) } = R _ {0} ^ {( i ) } ( c _ {i} hT ) u _ {m} + h \sum _ {j = 1 } ^ { i - 1 } A _ {ij } ( hT ) \left [ f ( t _ {m} + c _ {j} h , u _ {m + 1 } ^ {( j ) } ) - Tu _ {m + 1 } ^ {( j ) } \right ] , \qquad i = 2, \dots, s , $$

$$ u _ {m + 1 } = R _ {0} ^ {( s + 1 ) } ( hT ) u _ {m} + h \sum _ {j = 1 } ^ { s } B _ {j} ( hT ) \left [ f ( t _ {m} + c _ {j} h , u _ {m + 1 } ^ {( j ) } ) - Tu _ {m + 1 } ^ {( j ) } \right ] . $$
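
For illustration, one step of this scheme can be sketched in Python as follows; the function name `ark_step` and the argument layout are illustrative and not part of the original description. The matrix $ T $, the nodes $ c _ {i} $ and the rational matrix functions $ R _ {0} ^ {( i ) } $, $ A _ {ij } $, $ B _ {j} $ (all specified below) are treated as given callables acting on matrix arguments; evaluating such a rational matrix function in practice amounts to solving linear systems, in line with the remark further down.

```python
import numpy as np

def ark_step(f, t_m, u_m, h, T, c, R0, A, B):
    """One step u_m -> u_{m+1} of an s-stage adaptive Runge-Kutta (ARK) method.

    c  : nodes c_1, ..., c_s
    R0 : R0[i] evaluates R_0^{(i+1)} at a matrix argument (R0[s] is R_0^{(s+1)})
    A  : A[i][j] evaluates A_{i+1, j+1} at a matrix argument
    B  : B[j] evaluates B_{j+1} at a matrix argument
    """
    s = len(c)
    Tm = np.asarray(T, dtype=float)
    hT = h * Tm
    U = [np.asarray(u_m, dtype=float)]          # U[0] = u_{m+1}^{(1)} = u_m
    for i in range(1, s):                       # stages i = 2, ..., s
        stage = R0[i](c[i] * hT) @ U[0]
        for j in range(i):
            g = f(t_m + c[j] * h, U[j]) - Tm @ U[j]
            stage = stage + h * (A[i][j](hT) @ g)
        U.append(stage)
    u_next = R0[s](hT) @ U[0]                   # propagation by R_0^{(s+1)}(hT)
    for j in range(s):
        g = f(t_m + c[j] * h, U[j]) - Tm @ U[j]
        u_next = u_next + h * (B[j](hT) @ g)
    return u_next

# Hypothetical one-stage instance (s = 1): R_0^{(2)}(Z) = B_1(Z) = (I - Z)^{-1}.
if __name__ == "__main__":
    f = lambda t, y: -100.0 * y + np.sin(t)     # a stiff scalar test problem
    T = np.array([[-100.0]])                    # T approx f_y(t_m, u_m)
    lin = lambda Z: np.linalg.inv(np.eye(len(Z)) - Z)
    print(ark_step(f, 0.0, np.array([1.0]), 0.05, T, [0.0], [None, lin], [[None]], [lin]))
```

A short calculation shows that this one-stage choice (which is consistent with the definitions below, with $ \lambda _ {01 } ^ {( 2 ) } = 1 $) reproduces the linearly implicit Euler step $ u _ {m + 1 } = u _ {m} + h ( I - hT ) ^ {- 1 } f ( t _ {m} ,u _ {m} ) $, and that for $ T = 0 $ it gives the explicit Euler method.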

Here, $ T $ is an arbitrary matrix; for stability reasons one usually chooses $ T \approx f _ {y} ( t _ {m} ,u _ {m} ) $. For $ T = 0 $ the method reduces to an explicit Runge–Kutta method. The $ c _ {i} $ are real parameters and $ R _ {0} ^ {( i ) } ( z ) $, $ z \in \mathbf C $, are rational approximations to $ e ^ {z} $ for $ z \rightarrow 0 $. The rational matrix functions $ A _ {ij } $, $ B _ {j} $ are defined by

$$ A _ {ij } ( z ) = \sum _ {l = 0 } ^ { \rho _ {i} } R _ {l + 1 } ^ {( i ) } ( c _ {i} z ) c _ {i} ^ {l + 1 } \lambda _ {lj } ^ {( i ) } , $$

$$ B _ {j} ( z ) = \sum _ {l = 0 } ^ { \rho _ {s + 1 } } R _ {l + 1 } ^ {( s + 1 ) } ( z ) \lambda _ {lj } ^ {( s + 1 ) } , $$

with $ \lambda _ {lj } ^ {( i ) } \in \mathbf R $ and

$$ R _ {1} ^ {( i ) } ( z ) = { \frac{R _ {0} ^ {( i ) } ( z ) - 1 }{z} } , $$

$$ R _ {l + 1 } ^ {( i ) } ( z ) = { \frac{lR _ {l} ^ {( i ) } ( z ) - 1 }{z} } . $$
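
The recursion translates directly into code. A small sketch (with hypothetical helper names) evaluating $ R _ {l} ^ {( i ) } $, $ A _ {ij } $ and $ B _ {j} $ for a scalar argument $ z $ is given below; the list `lam` holds the column $ \lambda _ {0 j } ^ {( i ) } , \dots , \lambda _ {\rho j } ^ {( i ) } $ of the real parameters, and near $ z = 0 $ the division by $ z $ would in practice be replaced by a series expansion, since the singularity is only removable.

```python
def R_sequence(R0, z, rho):
    """Return [R_1(z), ..., R_{rho+1}(z)], generated from R_0 by the recursion above."""
    R = [(R0(z) - 1.0) / z]                     # R_1(z) = (R_0(z) - 1)/z
    for l in range(1, rho + 1):
        R.append((l * R[-1] - 1.0) / z)         # R_{l+1}(z) = (l R_l(z) - 1)/z
    return R

def A_ij(R0_i, c_i, z, lam, rho_i):
    """A_{ij}(z) = sum_{l=0}^{rho_i} R_{l+1}^{(i)}(c_i z) c_i^{l+1} lam[l]."""
    R = R_sequence(R0_i, c_i * z, rho_i)        # R_{l+1}^{(i)} evaluated at c_i z
    return sum(R[l] * c_i ** (l + 1) * lam[l] for l in range(rho_i + 1))

def B_j(R0_last, z, lam, rho_last):
    """B_j(z) = sum_{l=0}^{rho_{s+1}} R_{l+1}^{(s+1)}(z) lam[l]."""
    R = R_sequence(R0_last, z, rho_last)
    return sum(R[l] * lam[l] for l in range(rho_last + 1))

# Example: for R_0(z) = 1/(1 - z) and lam = [1.0], B_j(z) is again 1/(1 - z).
print(B_j(lambda z: 1.0 / (1.0 - z), 0.5, [1.0], 0))    # prints 2.0
```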

The computation of $ u _ {m + 1 } $ requires the solution of linear systems of algebraic equations only. The coefficients $ \lambda _ {lj } ^ {( i ) } $ are determined to give a high order of consistency or B-consistency ([a2]). Applied to the test equation of A-stability, $ y ^ \prime = \lambda y $ with $ { \mathop{\rm Re} } \lambda \leq 0 $, an adaptive Runge–Kutta method with $ T = \lambda $ yields

$$ u _ {m + 1 } = R _ {0} ^ {( s + 1 ) } ( h \lambda ) u _ {m} . $$
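
Indeed, for $ f ( t,y ) = \lambda y $ and $ T = \lambda $ every bracketed stage term vanishes,

$$ f ( t _ {m} + c _ {j} h , u _ {m + 1 } ^ {( j ) } ) - T u _ {m + 1 } ^ {( j ) } = \lambda u _ {m + 1 } ^ {( j ) } - \lambda u _ {m + 1 } ^ {( j ) } = 0 , $$

so only the propagation by $ R _ {0} ^ {( s + 1 ) } ( h \lambda ) $ remains, and the linear stability of the method is governed entirely by this rational function.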

By a suitable choice of the stability function $ R _ {0} ^ {( s + 1 ) } ( z ) $, adaptive Runge–Kutta methods are A- or L-stable and therefore well suited for stiff systems (cf. Stiff differential system). Furthermore, they can easily be adapted to the numerical solution of partitioned systems in which only a subsystem of dimension $ n _ {s} < n $ is stiff: by a corresponding choice of $ T $, the dimension of the linear systems to be solved can be reduced to $ n _ {s} $ [a1].
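
For instance, if the first $ n _ {s} $ components of $ y $ form the stiff subsystem, one may take $ T $ of the block form

$$ T = \begin{pmatrix} T _ {11 } & 0 \\ 0 & 0 \end{pmatrix} , \qquad T _ {11 } \in \mathbf R ^ {n _ {s} \times n _ {s} } \textrm{ an approximation to the Jacobian of the stiff part } ; $$

then every polynomial in $ hT $, and hence every denominator of the rational matrix functions, is block diagonal with the non-stiff block a scalar multiple of the identity, so only $ n _ {s} $-dimensional linear systems actually have to be solved (this is a schematic illustration only; see [a1] for the partitioned methods themselves).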

References

[a1] K. Strehmel, R. Weiner, "Partitioned adaptive Runge–Kutta methods and their stability" Numer. Math. , 45 (1984) pp. 283–300
[a2] K. Strehmel, R. Weiner, "$B$-convergence results for linearly implicit one step methods" BIT , 27 (1987) pp. 264–281
This article was adapted from an original article by R. Weiner (originator), which appeared in the Encyclopedia of Mathematics, ISBN 1402006098.