Rosenbrock methods
A numerical integration method of the Runge–Kutta type for stiff systems of ordinary differential equations (cf. also Runge–Kutta method; Stiff differential system). For Rosenbrock methods these systems are usually put in the autonomous form
$$ \tag{a1} \dot{y} = f(y), \quad t > t_0, \quad y(t_0) = y_0. $$
The property of stiffness, which means that the solution $y$ is composed of components possessing widely differing time constants, necessitates some form of implicitness for reasons of numerical stability [a1], [a2]. The best-known implicit method for stiff initial-value problems is the backward Euler method
$$ \tag{a2} y_{n+1} = y_n + \tau f(y_{n+1}), $$
where $\tau = t_{n+1} - t_n$ is the step size and $y_n$ is the approximation to $y(t)$ at time $t = t_n$. Since $y_{n+1}$ is defined implicitly, this numerical solution itself must also be approximated. Usually the iterative Newton method, or a modification thereof, is used, again for reasons of numerical stability. However, as the Euler method has only order of consistency $p = 1$, it makes no sense to carry out the iteration to high accuracy. In practice it is often sufficient to apply just one iteration per time step. If $y_n$ is used as the initial iterate, the following numerical result is found:
$$ \tag{a3} y_{n+1} = y_n + k, \qquad (I - \tau J) k = \tau f(y_n), $$
where $I$ is the unit matrix and $J = f'(y_n)$ denotes the Jacobian matrix of the vector function $f$ at $y_n$.
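As an illustration, here is a minimal sketch of one step of (a3) in Python/NumPy; the problem functions `f` and `jac` and the stiff test matrix `A` are illustrative assumptions, not part of the article.

```python
import numpy as np

def linearized_euler_step(f, jac, y_n, tau):
    """One step of the linearly implicit backward Euler method (a3):
    solve (I - tau*J) k = tau*f(y_n), then set y_{n+1} = y_n + k."""
    J = jac(y_n)                          # Jacobian f'(y_n)
    I = np.eye(len(y_n))
    k = np.linalg.solve(I - tau * J, tau * f(y_n))
    return y_n + k

# Hypothetical stiff test problem: y' = A y with time constants 1 and 1/1000.
A = np.array([[-1.0, 0.0], [0.0, -1000.0]])
f = lambda y: A @ y
jac = lambda y: A
y1 = linearized_euler_step(f, jac, np.array([1.0, 1.0]), tau=0.1)
```

Even with $\tau = 0.1$, far larger than the fast time constant $10^{-3}$, this step is stable, which is precisely what the implicit treatment buys.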
Note that the numerical solution is now computed by solving a system of linear algebraic equations, rather than a system of non-linear equations. H.H. Rosenbrock, in his landmark paper [a3] of 1963, proposed generalizing this linearly implicit approach to methods using more stages, so as to achieve a higher order of consistency $p$. The crucial point is that the iterative Newton method is no longer used; instead, stable formulas are derived by working the Jacobian matrix directly into the integration formula. His idea has found widespread use. A generally accepted form in the literature for an $s$-stage Rosenbrock method is (cf. [a1], [a2]):
$$ \tag{a4} y_{n+1} = y_n + \sum_{i=1}^{s} b_i k_i, $$

$$ k_i = \tau f \left( y_n + \sum_{j=1}^{i-1} \alpha_{ij} k_j \right) + \tau J \sum_{j=1}^{i} \gamma_{ij} k_j, \qquad i = 1, \dots, s. $$
Assuming that $\gamma_{ii} = \gamma$, hence equal for all stages, one time step costs one Jacobian evaluation, one LU-decomposition of the matrix $I - \tau \gamma J$, and $s$ backsolves accompanied by $s$ function evaluations. Per step, Rosenbrock methods are therefore computationally expensive. Yet they are attractive, since they are of one-step type, can be made A-stable or L-stable (cf. also Stability), and are easy to use: no iteration procedure is required, as it is for genuinely implicit methods.
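To make the cost count concrete, here is a schematic NumPy/SciPy sketch of one step of (a4) under the assumption $\gamma_{ii} = \gamma$; the coefficient arrays `alpha`, `gamma` (2-D) and `b` (1-D) are assumed to be supplied by a concrete Rosenbrock scheme, and all names are illustrative.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def rosenbrock_step(f, jac, y_n, tau, alpha, gamma, b):
    """One step of the s-stage Rosenbrock method (a4) with equal diagonal
    coefficients gamma[i, i]: one Jacobian evaluation, one LU decomposition
    of I - tau*gamma*J, and s backsolves with s function evaluations."""
    s, n = len(b), len(y_n)
    J = jac(y_n)
    lu = lu_factor(np.eye(n) - tau * gamma[0, 0] * J)   # single LU factorization
    k = np.zeros((s, n))
    for i in range(s):
        yi = y_n.copy()
        gk = np.zeros(n)              # running sum of gamma[i, j] * k_j, j < i
        for j in range(i):
            yi += alpha[i, j] * k[j]
            gk += gamma[i, j] * k[j]
        rhs = tau * f(yi) + tau * (J @ gk)
        k[i] = lu_solve(lu, rhs)      # one backsolve per stage
    return y_n + b @ k
```

For $s = 1$, $b_1 = 1$ and $\gamma_{11} = 1$ this reduces exactly to the linearly implicit Euler step (a3). Applied to the scalar test equation $\dot{y} = \lambda y$, the one-stage scheme has stability function $R(z) = (1 + (1 - \gamma) z)/(1 - \gamma z)$, $z = \tau \lambda$; the choice $\gamma = 1$ gives the L-stable $R(z) = 1/(1 - z)$.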
Rosenbrock methods are also called Runge–Kutta–Rosenbrock methods. If one puts $J = 0$, a classical explicit Runge–Kutta method results (made explicit below). Rosenbrock methods have proved very valuable in practice. A good example is provided by the FORTRAN solver RODAS from [a2]. The underlying method of this solver uses $6$ stages, is L-stable and stiffly accurate, and has order $p = 4$. The solver is also applicable to linearly implicit systems of the form $M \dot{y} = f(y)$. In the literature, variants of the Rosenbrock method are discussed in which the Jacobian matrix $J$ is frozen for a number of time steps, or even replaced by an approximation that renders the linear system solution cheaper (see [a1], [a2]). For very large-scale problems this is of obvious practical importance, as it can reduce CPU time considerably.
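To make the reduction mentioned above explicit: putting $J = 0$ in (a4) leaves

$$ k_i = \tau f \left( y_n + \sum_{j=1}^{i-1} \alpha_{ij} k_j \right), \qquad y_{n+1} = y_n + \sum_{i=1}^{s} b_i k_i, $$

which is the classical explicit Runge–Kutta form, with the $\alpha_{ij}$ playing the role of the Runge–Kutta matrix entries and the $b_i$ that of the weights.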
References
[a1] K. Dekker, J.G. Verwer, "Stability of Runge–Kutta methods for stiff nonlinear differential equations", North-Holland (1984)
[a2] E. Hairer, G. Wanner, "Solving ordinary differential equations, II: Stiff and differential-algebraic problems", Springer (1991)
[a3] H.H. Rosenbrock, "Some general implicit processes for the numerical solution of differential equations", Comput. J., 5 (1963) pp. 329–330
Rosenbrock methods. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Rosenbrock_methods&oldid=18016