Kutta-Merson method

A five-stage Runge–Kutta method with fourth-order accuracy. Applied to the Cauchy problem

$$y'(x)=f(x,y(x)),\quad x_0\leq x\leq X,\quad y(x_0)=y_0,\tag{1}$$

the method is as follows:
\begin{equation}\label{2}\tag{2}
\begin{gathered}
k_1 = h f(x_0,y_0), \quad y_0 = y(x_0),\\
k_2 = h f \big( x_0 + \tfrac13 h, y_0 + \tfrac13 k_1 \big),\\
k_3 = h f \big( x_0 + \tfrac13 h, y_0 + \tfrac16 k_1 + \tfrac16 k_2 \big),\\
k_4 = h f \big( x_0 + \tfrac12 h, y_0 + \tfrac18 k_1 + \tfrac38 k_3 \big),\\
k_5 = h f \big( x_0 + h, y_0 + \tfrac12 k_1 - \tfrac32 k_3 + 2 k_4 \big),\\
y^1(x_0+h) = y_0 + \tfrac12 k_1 - \tfrac32 k_3 + 2k_4,\\
y^2(x_0+h) = y_0 + \tfrac16 k_1 + \tfrac23 k_4 + \tfrac16 k_5.
\end{gathered}
\end{equation}
The number $R=0.2|y^1-y^2|$ serves as an estimate of the error and is used for automatic selection of the integration step. If $\epsilon$ is the prescribed accuracy of the computation, the integration step is selected as follows. First choose some initial step and start the computation by \eqref{2} to obtain the number $R$. If $R>\epsilon$, divide the integration step by 2; if $R\leq\epsilon/64$, double it. If $\epsilon/64<R\leq\epsilon$, the selected integration step is satisfactory. Now replace the initial point $x_0$ by $x_0+h$ and repeat the entire procedure. This yields an approximate solution $y^2$; the quantity $y^1$ is mainly auxiliary.
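The following is a minimal sketch in Python of one step \eqref{2} together with the step-selection rule just described. The names kutta_merson_step and kutta_merson_integrate are illustrative, not part of any standard library; a scalar right-hand side $f(x,y)$ is assumed, and doubling of the step is interpreted here as applying to the next step.

```python
def kutta_merson_step(f, x0, y0, h):
    """One step of scheme (2): returns the third-order value y1,
    the fourth-order value y2 and the error estimate R = 0.2*|y1 - y2|."""
    k1 = h * f(x0, y0)
    k2 = h * f(x0 + h / 3, y0 + k1 / 3)
    k3 = h * f(x0 + h / 3, y0 + k1 / 6 + k2 / 6)
    k4 = h * f(x0 + h / 2, y0 + k1 / 8 + 3 * k3 / 8)
    k5 = h * f(x0 + h, y0 + k1 / 2 - 3 * k3 / 2 + 2 * k4)
    y1 = y0 + k1 / 2 - 3 * k3 / 2 + 2 * k4
    y2 = y0 + k1 / 6 + 2 * k4 / 3 + k5 / 6
    return y1, y2, 0.2 * abs(y1 - y2)


def kutta_merson_integrate(f, x0, y0, X, h, eps):
    """Advance from x0 to X with the halve/double rule described above:
    a step with R > eps is rejected and retried with h/2; an accepted step
    with R <= eps/64 is taken as a hint to double h for the next step."""
    x, y = x0, y0
    while x < X:
        h = min(h, X - x)              # do not step beyond X
        y1, y2, R = kutta_merson_step(f, x, y, h)
        if R > eps:
            h /= 2                     # step too large: halve and retry
            continue
        x, y = x + h, y2               # accept the fourth-order value
        if R <= eps / 64:
            h *= 2                     # step unnecessarily small: double
    return y
```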

Since

$$y^2(x_0+h)=y_0+\frac16k_1+\frac23k_4+\frac16hf(x_0+h,y^1(x_0+h)),$$

i.e. the formula for $y^1$ is, as it were, "nested" in the formula for $y^2$, the method described here for the estimation of the error and the selection of the integration step is known as an imbedded Runge–Kutta method.
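As a concrete illustration of this nesting (reusing the hypothetical kutta_merson_step sketched above, with the test problem $y'=y$, $y(0)=1$), the fourth-order value can be rebuilt from $y^1$ by a single additional evaluation of $f$:

```python
import math

def f(x, y): return y                    # test problem y' = y, y(0) = 1
x0, y0, h = 0.0, 1.0, 0.1

y1, y2, R = kutta_merson_step(f, x0, y0, h)

# rebuild y^2 from y^1: the only new information needed is f(x0 + h, y^1)
k1 = h * f(x0, y0)
k2 = h * f(x0 + h / 3, y0 + k1 / 3)
k3 = h * f(x0 + h / 3, y0 + k1 / 6 + k2 / 6)
k4 = h * f(x0 + h / 2, y0 + k1 / 8 + 3 * k3 / 8)
y2_nested = y0 + k1 / 6 + 2 * k4 / 3 + h * f(x0 + h, y1) / 6

print(y2 - y2_nested)                    # 0.0: the two expressions agree
print(math.exp(h) - y2)                  # local error of the fourth-order value
```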

Standard programs for the Kutta–Merson method are available in Algol [1], [2].

References

[1] J. Christiansen, "Numerical solution of ordinary simultaneous differential equations of the 1st order using a method for automatic step change", Numer. Math., 14 (1970), pp. 317–324
[2] P.M. Lukehart, "Algorithm 218. Kutta Merson", Comm. Assoc. Comput. Mach., 6:12 (1963), pp. 737–738
[3] L. Fox, "Numerical solution of ordinary and partial differential equations", Pergamon (1962)
[4] G.N. Lance, "Numerical methods for high speed computers", Iliffe (1960)


Comments

Often Runge's name is added: Runge–Kutta–Merson method.

The (Runge–) Kutta–Merson method is due to R.H. Merson [a6]. The order of the numerical approximation defined by $y^2$ is four and that of the auxiliary (reference) solution $y^1$ is three. Hence, in general, the difference of these two numerical approximations is only of order three, so that a conservative error estimate results (i.e., it overestimates the local error $y(x_0+h)-y^2$ for small $h$). However, for linear equations with constant coefficients, a correct estimate of the local error is obtained as $h\to0$. This can be shown by observing that for linear equations

$$y^1=\left[1+h\frac{d}{dx}+\frac{\left(h\frac{d}{dx}\right)^2}{2}+\frac{\left(h\frac{d}{dx}\right)^3}{6}+\frac{\left(h\frac{d}{dx}\right)^4}{24}\right]y(x_0),$$

$$y^2=y^1+\frac{\left(h\frac{d}{dx}\right)^5}{144}y(x_0),$$

$$y(x_0+h)=y^1+\left[\frac{\left(h\frac{d}{dx}\right)^5}{120}+O(h^6)\right]y(x_0).$$

Thus,

$$R=0.2|y^1-y^2|=\left[\frac{\left(h\frac{d}{dx}\right)^5}{720}\right]y(x_0)$$

and

$$y(x_0+h)-y^2=\left[\frac{\left(h\frac{d}{dx}\right)^5}{720}+O(h^6)\right]y(x_0),$$

so that $R$ equals the local error $y(x_0+h)-y^2$ within order $h^6$. A further discussion of this method may be found in [a1] and [a5]. A Fortran code of the Kutta–Merson method is available in the NAG library. The Kutta–Merson method is the earliest proposed method belonging to the family of imbedded methods. Higher-order imbedded formulas providing asymptotically-correct approximations to the local error have been derived by E. Fehlberg [a3], [a4]; they have the additional feature that the error constants of the main formula (the formula of highest order) are minimized. However, since the relation between the true (global) error and the local error is generally not known, it is questionable whether one should use the highest-order formula as the main formula, and modern insights advocate interchanging the roles of the main formula and the reference formula (see [a5]). The imbedded method of J.R. Dormand and P.J. Prince [a2], which combines an eighth-order formula with a seventh-order reference formula, is considered one of the most efficient high-accuracy methods currently available [a5].
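The constant-coefficient expansions above are easily checked numerically. The following sketch (again using the hypothetical kutta_merson_step from the first code fragment) applies the method to $y'=\lambda y$, $y(0)=1$, and compares $y^2-y^1$, the true local error and $R$ with the leading terms predicted above, where $z=h\lambda$:

```python
import math

lam = 1.0
def f(x, y):
    return lam * y                       # y' = lambda*y, y(0) = 1

for h in (0.2, 0.1, 0.05):
    z = h * lam
    y1, y2, R = kutta_merson_step(f, 0.0, 1.0, h)
    exact = math.exp(z)                  # exact local solution y(h)
    print(h,
          (y2 - y1) / (z**5 / 144),      # 1 up to rounding: exact for linear problems
          (exact - y2) / (z**5 / 720),   # -> 1 as h -> 0
          R / (exact - y2))              # -> 1: R is asymptotically correct here
```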

References

[a1] J.C. Butcher, "The numerical analysis of ordinary differential equations. Runge–Kutta and general linear methods", Wiley (1987)
[a2] J.R. Dormand, P.J. Prince, "A family of embedded Runge–Kutta formulae", J. Comp. Appl. Math., 6 (1980), pp. 19–26
[a3] E. Fehlberg, "Classical fifth-, sixth-, seventh-, and eighth-order Runge–Kutta formulas with stepsize control", NASA Techn. Rep., 287 (Abstract in: Computing (1969), 93–106)
[a4] E. Fehlberg, "Low-order classical Runge–Kutta formulas with stepsize control and their application to some heat transfer problems", NASA Techn. Rep., 315 (Abstract in: Computing (1969), 61–71)
[a5] E. Hairer, S.P. Nørsett, G. Wanner, "Solving ordinary differential equations", I: Nonstiff problems, Springer (1987)
[a6] R.H. Merson, "An operational method for the study of integration processes", Proc. Symp. Data Processing, Weapons Res. Establ. Salisbury, Salisbury (1957), pp. 110–125
[a7] S.O. Fatunla, "Numerical methods for initial value problems in ordinary differential equations", Acad. Press (1988)