Adams method

A finite-difference method for solving Cauchy's problem for systems of first-order differential equations

$$ y' = f(x, y), \qquad y(x_0) = y_0. $$

When integrating over a grid $x_n = x_0 + nh$ with a constant step $h$, the computational formulas may be based either a) on extrapolation

$$ y_{n+1} = y_n + h \sum_{\lambda = 0}^{k} u_{-\lambda}\, f(x_{n-\lambda}, y_{n-\lambda}); $$

or b) on interpolation

$$ y_{n+1} = y_n + h \sum_{\lambda = -1}^{k-1} v_{-\lambda}\, f(x_{n-\lambda}, y_{n-\lambda}). $$

For a given $k$, the latter formula gives more accurate results, but a non-linear system of equations must be solved in order to find $y_{n+1}$.
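
To make the extrapolation case a) concrete, here is a minimal Python sketch of the two-step explicit formula ($k = 1$), with the coefficients $u_0 = 3/2$, $u_{-1} = -1/2$ quoted in the Comments below. The model problem $y' = -y$ and the single Euler starting step are illustrative assumptions, not prescriptions of the article.

<pre>
import numpy as np

def adams_bashforth2(f, x0, y0, h, n_steps):
    # Explicit two-step Adams formula (extrapolation case a), k = 1):
    #   y_{n+1} = y_n + h * (3/2 * f_n - 1/2 * f_{n-1})
    x = x0 + h * np.arange(n_steps + 1)
    y = np.empty(n_steps + 1)
    y[0] = y0
    # The starting value y_1 must come from a special method (see below);
    # a single Euler step is the simplest illustrative choice.
    y[1] = y[0] + h * f(x[0], y[0])
    for n in range(1, n_steps):
        y[n + 1] = y[n] + h * (1.5 * f(x[n], y[n]) - 0.5 * f(x[n - 1], y[n - 1]))
    return x, y

# Model problem y' = -y, y(0) = 1, exact solution exp(-x).
x, y = adams_bashforth2(lambda x, y: -y, 0.0, 1.0, 0.1, 50)
print(abs(y[-1] - np.exp(-x[-1])))   # small global error
</pre>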

In practical work, an approximation is first found by the former (extrapolation) formula, and then one or two corrections are performed according to the formula

$$ y_{n+1}^{(i+1)} = y_n + h \sum_{\lambda = 0}^{k-1} v_{-\lambda}\, f(x_{n-\lambda}, y_{n-\lambda}) + h v_1 f(x_{n+1}, y_{n+1}^{(i)}), $$

which converges as $i \to \infty$ if $h\, |v_1|\, \| \partial f / \partial y \| < 1$. The initial values $y_1, \dots, y_k$ for Adams' method, which are needed to begin the calculations by formula a), must be found by some special method.
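
The following Python sketch illustrates this predictor-corrector procedure for $k = 1$: the two-step extrapolation formula predicts $y_{n+1}^{(0)}$, and the trapezoidal corrector ($v_1 = v_0 = 1/2$, listed in the Comments below) is applied a fixed number of times. The function name, the model problem and the fixed number of corrections are illustrative assumptions.

<pre>
import numpy as np

def adams_pece(f, x0, y0, h, n_steps, corrections=2):
    # Predict with the two-step Adams-Bashforth formula, then apply the
    # trapezoidal Adams-Moulton corrector (v_1 = v_0 = 1/2) a fixed number of times.
    x = x0 + h * np.arange(n_steps + 1)
    y = np.empty(n_steps + 1)
    y[0] = y0
    y[1] = y[0] + h * f(x[0], y[0])       # illustrative Euler starting step
    for n in range(1, n_steps):
        fn = f(x[n], y[n])
        yp = y[n] + h * (1.5 * fn - 0.5 * f(x[n - 1], y[n - 1]))   # predictor, case a)
        for _ in range(corrections):       # one or two corrections in practice
            yp = y[n] + 0.5 * h * (fn + f(x[n + 1], yp))           # corrector, case b)
        y[n + 1] = yp
    return x, y

# For y' = -y with h = 0.1: h*|v_1|*|df/dy| = 0.05 < 1, so the corrections converge.
x, y = adams_pece(lambda x, y: -y, 0.0, 1.0, 0.1, 50)
print(abs(y[-1] - np.exp(-x[-1])))
</pre>

The error in the solution can be written as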

$$ \chi_{k+1} \left( \int_{x_0}^{x_n} \Omega(x, t)\, y^{(k+2)}(t)\, dt \right) h^{k+1} + O(h^{k+2}), $$

where $\Omega(x, t)$ is the solution of the system

$$ \frac{d \Omega(x, t)}{dx} = f_y(x, y(x))\, \Omega(x, t) $$

with the initial condition $\Omega(t, t) = E$, where $E$ is the identity matrix.
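
For instance, in the scalar case $y' = -ay$ treated below, $f_y = -a$ and the system is solved by

$$ \Omega(x, t) = e^{-a(x - t)}, $$

which decays as $x$ grows when $a > 0$, so the main error term stays bounded on long integration intervals.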

The structure of the term $O(h^{k+2})$ is such that, for small values of $h$, it is usually uniformly small compared to the main term on large intervals of integration. This means that Adams' method can be applied over a large integration interval in the case of an absolutely-stable solution of the differential problem. In particular, as distinct from the Milne method, it may be used for finding stable periodic solutions of differential equations. The standard Adams procedure for integration with automatic step selection is considerably more complicated than the standard Runge–Kutta method: the step-changing algorithm is much more involved, and the selection of the initial values $y_1, \dots, y_k$ is not standardized.

In the case of the equation $y' = -ay$, $a > 0$, the extrapolation formula a) has the form

$$ y_{n+1} = y_n + h \sum_{\lambda = 0}^{k} u_{-\lambda} (-a y_{n-\lambda}). $$

Particular solutions of this difference equation are $y_n = C \mu^n$, where $\mu$ is a root of the equation

$$ \mu^{k+1} = \mu^k - ah \sum_{\lambda = 0}^{k} u_{-\lambda}\, \mu^{k - \lambda}. $$

If $ah \sum_{\lambda = 0}^{k} |u_{-\lambda}| > 2$, then this equation has a root with $|\mu| > 1$, and the rounding-off errors grow rapidly. When integrating with automatic step selection, this results in an unjustified reduction of the step size. However, Adams' method proves to be more economical than the Runge–Kutta method in most cases; it was first introduced by J.C. Adams in 1855.
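
This root condition is easy to examine numerically. The sketch below (an illustration, not part of the original article) rewrites the equation above as a polynomial in $\mu$ and reports the largest root modulus for the two-step method.

<pre>
import numpy as np

def adams_char_roots(ah, u):
    # Roots mu of  mu^{k+1} = mu^k - ah * sum_{lam=0}^{k} u_{-lam} * mu^{k-lam},
    # where u = [u_0, u_{-1}, ..., u_{-k}] are the extrapolation coefficients.
    k = len(u) - 1
    coeffs = np.zeros(k + 2)            # descending powers mu^{k+1}, ..., mu^0
    coeffs[0] = 1.0                     # mu^{k+1}
    coeffs[1] = -1.0 + ah * u[0]        # mu^k collects -1 and ah*u_0
    coeffs[2:] = ah * np.asarray(u[1:], dtype=float)
    return np.roots(coeffs)

# Two-step method (u_0 = 3/2, u_{-1} = -1/2), so ah * sum|u_{-lam}| = 2*ah:
# the instability condition ah * sum|u_{-lam}| > 2 holds once ah > 1.
for ah in (0.1, 1.5):
    print(ah, max(abs(adams_char_roots(ah, [1.5, -0.5]))))
# For ah = 1.5 the largest modulus exceeds 1, so rounding errors grow.
</pre>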

References

[1] I.S. Berezin, N.P. Zhidkov, "Computing methods", 2, Pergamon (1973) (Translated from Russian)
[2] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations", MIR (1977) (Translated from Russian)
[3] A.N. Tikhonov, A.D. Gorbunov, "Asymptotic expansions of the error in the difference method of solving Cauchy's problem for systems of differential equations", USSR Comput. Math. Math. Phys., 2 : 4 (1962) pp. 565–586; Zh. Vychisl. Mat. i Mat. Fiz., 2 : 4 (1962) pp. 537–548
[4] S.M. Lozinskii, "An estimate of the error of numerical integration of differential equations I", Izv. Vyssh. Uchebn. Zaved. Mat., 5 : 6 (1958) pp. 52–90 (In Russian)
[5] N.S. Bakhvalov, "On an estimate of the error at numerical integration of differential equations by Adams' extrapolation method", Dokl. Akad. Nauk SSSR, 104 : 5 (1955) pp. 683–686 (In Russian)

Comments

The Adams method is a special $k$-step or $(k+1)$-step multistep method. It yields explicit difference equations (predictor formula) in the extrapolation case a) and implicit difference equations (corrector formula) in the interpolation case b). Certain special cases are known as Adams–Bashforth methods, viz. the series of explicit methods $k = 0$, $u_0 = 1$ (Euler's method); $k = 1$, $u_0 = 3/2$, $u_{-1} = -1/2$; $k = 2$, $u_0 = 23/12$, $u_{-1} = -4/3$, $u_{-2} = 5/12$; etc.

A certain series of implicit methods is known as Adams–Moulton methods, viz. $k = 1$, $v_1 = 1/2$, $v_0 = 1/2$ (the trapezoidal rule); $k = 2$, $v_1 = 5/12$, $v_0 = 2/3$, $v_{-1} = -1/12$; etc.
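
For convenience, the coefficients quoted above can be collected in a small table. The arrangement below is an illustrative Python sketch; the sanity check rests on the fact that each coefficient row sums to $1$, so that the constant problem $y' = 1$ is integrated exactly.

<pre>
from fractions import Fraction as F

# Explicit Adams-Bashforth coefficients [u_0, u_{-1}, ..., u_{-k}] and
# implicit Adams-Moulton coefficients [v_1, v_0, ..., v_{-(k-1)}],
# exactly as listed in the Comments above.
ADAMS_BASHFORTH = {
    0: [F(1)],                          # Euler's method
    1: [F(3, 2), F(-1, 2)],
    2: [F(23, 12), F(-4, 3), F(5, 12)],
}
ADAMS_MOULTON = {
    1: [F(1, 2), F(1, 2)],              # the trapezoidal rule
    2: [F(5, 12), F(2, 3), F(-1, 12)],
}

# Each row sums to 1, so y' = 1 is integrated exactly.
for table in (ADAMS_BASHFORTH, ADAMS_MOULTON):
    assert all(sum(row) == 1 for row in table.values())
</pre>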

References

[a1] L. Fox, D.F. Mayers, "Computing methods for scientists and engineers", Clarendon Press (1968)
[a2] J.B. Rosser, "Solving differential equations on a hand-held programmable calculator", in G.H. Golub (ed.), Studies in numerical analysis, Math. Assoc. Amer. (1984) pp. 199–242
How to Cite This Entry:
Adams method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Adams_method&oldid=12695
This article was adapted from an original article by N.S. Bakhvalov (originator), which appeared in the Encyclopedia of Mathematics, ISBN 1402006098.