A method for accelerating the convergence of solutions of difference problems (see Approximation of a differential boundary value problem by difference boundary value problems). The principal idea of the method consists in regarding the solution $u_h(x)$ of a convergent difference problem, for fixed $x$, as a function of the vanishing parameter $h$ of the difference grid, choosing the appropriate interpolation function $\chi(h)$ based on several values of the solution $u_h(x)$ for different $h$, and calculating $\chi(0)$, which is an approximate value of the sought solution $u(x)$, the limit of the sequence $u_h(x)$ as $h \rightarrow 0$. Usually, the function $\chi(h)$ is sought in the form of an interpolation polynomial in $h$.
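To illustrate the idea on a minimal example (a hypothetical sketch, not part of the original article): take $u_h(x)$ to be the first-order forward difference quotient approximating $f'(x)$, whose error expands in powers of $h$; the linear interpolant $\chi(h)$ through two values, evaluated at $h = 0$, removes the leading $O(h)$ error term.

```python
# Minimal sketch (hypothetical example, not from the article): extrapolate a
# first-order forward difference quotient u_h(x) = (f(x+h) - f(x)) / h, whose
# error behaves like c_1*h + c_2*h^2 + ..., by evaluating at h = 0 the linear
# interpolant chi(h) through (h1, u_h1) and (h2, u_h2).
import math

def u_h(f, x, h):
    """Forward difference approximation of f'(x) with step h."""
    return (f(x + h) - f(x)) / h

f, x = math.exp, 1.0
h1, h2 = 0.1, 0.05
u1, u2 = u_h(f, x, h1), u_h(f, x, h2)
chi0 = u2 + (u2 - u1) * h2 / (h1 - h2)   # chi(0) of the linear interpolant
print(abs(u2 - math.e))     # error of u_h alone, roughly 7e-2
print(abs(chi0 - math.e))   # error after extrapolation, roughly 2e-3
```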
The method is named after L.F. Richardson [1], who was the first to apply it to improve the accuracy of solutions of difference problems and who called it the deferred approach to the limit.
The theoretical basis of the applicability of this method is the existence of an expansion
$$ u_h(x) = u(x) + \sum_{i=1}^{m-1} h^{\beta_i} v_i(x) + h^B \eta_h(x), $$
where $B > \beta_{m-1} > \dots > \beta_1 > 0$, the functions $v_i$ are independent of $h$, and $\eta_h(x)$ are the values of a grid function which remains bounded as $h \rightarrow 0$. There are several theoretical methods for establishing whether such expansions exist [4].
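A classical instance of such an expansion (a standard fact, added here only for illustration): for the composite trapezoidal rule $u_h$ approximating the integral $u = \int_a^b f(t)\,dt$ of a sufficiently smooth integrand, the Euler–Maclaurin formula gives

$$ u_h = u + \sum_{i=1}^{m-1} c_i h^{2i} + O(h^{2m}), $$

i.e. $\beta_i = 2i$ and $B = 2m$, which is the situation of case a) below with $p = 2$.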
Usually, linear extrapolation is used: from $m$ values of $u_h(x)$ at the same point $x$ for different parameters $h = h_1, \dots, h_m$ one calculates the extrapolated value $u_H(x)$ by the formula
$$ u_H(x) = \sum_{k=1}^{m} \gamma_k u_{h_k}(x), $$
where the weights $\gamma_k$ are defined by the following system of equations:
$$ \sum_{k=1}^{m} \gamma_k = 1, \qquad \sum_{k=1}^{m} \gamma_k h_k^{\beta_i} = 0, \quad i = 1, \dots, m-1. $$
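In code, the weights can be obtained directly from this system. The following sketch (with hypothetical helper names, not taken from the article) solves it numerically for given step sizes $h_k$ and exponents $\beta_i$ and then forms $u_H(x)$:

```python
# Sketch of linear Richardson extrapolation with general exponents beta_i:
# solve  sum_k gamma_k = 1,  sum_k gamma_k * h_k**beta_i = 0  (i = 1..m-1)
# for the weights gamma_k and combine the grid solutions u_{h_k}(x).
import numpy as np

def richardson_weights(h, beta):
    """h: the m step sizes h_k; beta: the m-1 exponents beta_1 < ... < beta_{m-1}."""
    h = np.asarray(h, dtype=float)
    m = len(h)
    A = np.vstack([np.ones(m)] + [h**b for b in beta])  # (m x m) system matrix
    rhs = np.zeros(m)
    rhs[0] = 1.0
    return np.linalg.solve(A, rhs)

def extrapolate(u_values, h, beta):
    """u_values[k] = u_{h_k}(x) at a fixed point x; returns u_H(x)."""
    return richardson_weights(h, beta) @ np.asarray(u_values, dtype=float)
```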
If among the $h_k$ there are no values which are too close to each other, then
$$ | u_H(x) - u(x) | = O( \bar{h}^B ), $$
where $\bar{h} = \max_{1 \leq k \leq m} h_k$; i.e. the order of convergence of $u_H(x)$ to $u(x)$ as $h \rightarrow 0$ is $B$, which is greater than $\beta_1$, the order of convergence of $u_h(x)$ to $u(x)$. In two special cases there exist algorithms for calculating $u_H(x)$ without having to determine the coefficients $\gamma_k$:
a) in the case when $\beta_i = ip$, $p > 0$, $i = 1, \dots, m-1$, the method reduces to interpolation by a polynomial in $h^p$, and from the Lagrange interpolation formula it follows that
$$ u_H(x) = a_1^{(m-1)}, $$
where
$$ a_j^{(0)} = u_{h_j}(x), \quad j = 1, \dots, m; $$
$$ \tag{*} a_j^{(i)} = a_{j+1}^{(i-1)} + \frac{a_{j+1}^{(i-1)} - a_j^{(i-1)}}{(h_j/h_{i+j})^p - 1}, $$
$$ j = 1, \dots, m-i, \quad i = 1, \dots, m-1; $$
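A sketch of this Neville-type recursion in code (hypothetical function name; zero-based indexing, so a[j] plays the role of $a_{j+1}^{(i)}$):

```python
# Neville-type scheme (*) for case a), beta_i = i*p: a[j] is repeatedly
# overwritten in place; at the end a[0] holds a_1^{(m-1)} = u_H(x).
def neville_extrapolate(u, h, p):
    """u[j] = u_{h_{j+1}}(x) for j = 0..m-1; h[j] the corresponding steps."""
    m = len(u)
    a = list(u)                      # stage 0: a[j] = a_{j+1}^{(0)}
    for i in range(1, m):            # stage i: interpolation over i+1 values
        for j in range(m - i):
            ratio = (h[j] / h[j + i]) ** p
            a[j] = a[j + 1] + (a[j + 1] - a[j]) / (ratio - 1.0)
    return a[0]
```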
b) in the case when $h_i = h_0 b^i$, $0 < b < 1$, $i = 1, \dots, m-1$, formula (*) is replaced by
$$ a_j^{(i)} = a_{j+1}^{(i-1)} + \frac{a_{j+1}^{(i-1)} - a_j^{(i-1)}}{b^{-\beta_i} - 1}. $$
This algorithm, called Romberg's rule (W. Romberg, 1955), is widely applied in the construction of quadrature formulas (see [5]). In order that, for different $h_i$, the grids $D_{h_i}$ (see Approximation of a differential operator by difference operators) have as many common nodes as possible at which the Richardson extrapolation can be carried out, the parameters $h_i$ are chosen from one of the following sequences: $h_i = h_0/i$, $i = 1, 2, \dots$; $h_i = h_0 2^{1-i}$, $i = 1, 2, \dots$; $h_0, h_0/2, h_0/3, h_0/4, h_0/6, h_0/8, h_0/12, \dots$.
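As a concrete instance of case b) (a minimal sketch under the standard assumptions for the composite trapezoidal rule, not code from the article): with $h_j = h_0 2^{-j}$, i.e. $b = 1/2$, and $\beta_i = 2i$, the denominator $b^{-\beta_i} - 1$ becomes $4^i - 1$ and the recursion is the classical Romberg scheme for quadrature.

```python
# Romberg quadrature: trapezoidal sums on nested grids h_j = (b-a)/2**j,
# beta_i = 2*i, so the denominator b**(-beta_i) - 1 equals 4**i - 1.
def romberg(f, a, b, m):
    """Return the Romberg estimate of the integral of f over [a, b] from m levels."""
    T = []                                     # T[j] plays the role of a_{j+1}^{(0)}
    n, h = 1, b - a
    t = 0.5 * h * (f(a) + f(b))                # coarsest trapezoidal sum
    T.append(t)
    for j in range(1, m):                      # refine: halve h, reuse old sum
        n *= 2
        h *= 0.5
        t = 0.5 * t + h * sum(f(a + (2 * k + 1) * h) for k in range(n // 2))
        T.append(t)
    for i in range(1, m):                      # extrapolation stages, in place
        for j in range(m - i):
            T[j] = T[j + 1] + (T[j + 1] - T[j]) / (4.0 ** i - 1.0)
    return T[0]
```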
Linear extrapolation is not the only possibility. For instance, in the case $\beta_i = ip$, $p > 0$, one can take as the interpolation function $\chi(h)$ a rational function of the form $\phi(h^p)/\psi(h^p)$, where $\phi(t)$ and $\psi(t)$ are polynomials in $t$ of degrees $[(m-1)/2]$ and $[m/2]$, respectively. The result of the rational extrapolation, $u_H(x) = \chi(0)$, can then be calculated by the following recursive procedure:
$$ d_j^{(-1)} = 0, \quad j = 2, \dots, m; $$
$$ d_j^{(0)} = u_{h_j}(x), \quad j = 1, \dots, m; $$
$$ d_j^{(i)} = d_{j+1}^{(i-1)} + \left( d_{j+1}^{(i-1)} - d_j^{(i-1)} \right) \left\{ \left( \frac{h_j}{h_{i+j}} \right)^{p} \left[ 1 - \frac{d_{j+1}^{(i-1)} - d_j^{(i-1)}}{d_{j+1}^{(i-1)} - d_{j+1}^{(i-2)}} \right] - 1 \right\}^{-1}, $$
$$ j = 1, \dots, m-i, \quad i = 1, \dots, m-1; \qquad u_H(x) = d_1^{(m-1)}. $$
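This recursion can be sketched in code as follows (hypothetical function name, again for the case $\beta_i = ip$; no guard is included against vanishing denominators, which a robust implementation would need):

```python
# Rational extrapolation at h = 0 following the recursion above: d_prev2,
# d_prev and d_cur hold the stages i-2, i-1 and i of the d-table.
def rational_extrapolate(u, h, p):
    """u[j] = u_{h_{j+1}}(x) for j = 0..m-1; returns d_1^{(m-1)} = u_H(x)."""
    m = len(u)
    d_prev2 = [0.0] * m              # d_j^{(-1)} = 0
    d_prev = list(u)                 # d_j^{(0)} = u_{h_j}(x)
    for i in range(1, m):
        d_cur = []
        for j in range(m - i):       # 0-based counterpart of j = 1..m-i
            num = d_prev[j + 1] - d_prev[j]
            ratio = (h[j] / h[j + i]) ** p
            denom = ratio * (1.0 - num / (d_prev[j + 1] - d_prev2[j + 1])) - 1.0
            d_cur.append(d_prev[j + 1] + num / denom)
        d_prev2, d_prev = d_prev, d_cur
    return d_prev[0]
```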
Richardson extrapolation is conveniently realized on a computer: in order to achieve high accuracy it only requires the repeated solution of simple difference problems (sometimes with small modifications) of low order of approximation, for which standard methods of solution and computer programs are usually well developed.
References
[1] | L.F. Richardson, "The approximate arithmetical solution by finite differences of physical problems involving differential equations, with an application to the stress in a masonry dam" Philos. Trans. Roy. Soc. Ser. A , 210 (1910) pp. 307–357 |
[2] | R. Bulirsch, J. Stoer, "Fehlerabschätzungen und Extrapolation mit rationalen Funktionen bei Verfahren vom Richardson-Typus" Numer. Math. , 6 : 5 (1964) pp. 413–427 |
[3] | D.C. Joyce, "Survey of extrapolation processes in numerical analysis" SIAM Review , 13 : 4 (1971) pp. 435–490 |
[4] | G.I. Marchuk, V.V. Shaidurov, "Difference methods and their extrapolations" , Springer (1983) (Translated from Russian) |
[5] | N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian) |
Comments
References
[a1] | T. Håvie, "Generalized Neville type extrapolation schemes" BIT , 19 (1979) pp. 204–213 |
Richardson extrapolation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Richardson_extrapolation&oldid=48539