Stiff differential system
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
A system of ordinary differential equations whose numerical solution by explicit methods of Runge–Kutta or Adams type forces the integration step to remain small despite the slow change in the desired variables. Attempts to reduce the time for calculating the solution of a stiff differential system by increasing the integration step lead to a sharp increase in the error (error explosion).
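Since the formulas of the article are rendered as images, the notation below is assumed: writing the scalar test problem as y' = λy with λ large and negative, the "error explosion" under an explicit method can be sketched in a few lines of Python (illustrative values, not from the article):

```python
# Illustrative sketch (assumed test equation y' = lam*y): explicit Euler
# multiplies the solution by (1 + h*lam) each step, so for |1 + h*lam| > 1
# the numerical solution explodes even though the exact one decays.
def euler(lam, h, steps, y0=1.0):
    y = y0
    for _ in range(steps):
        y += h * lam * y   # y_{n+1} = (1 + h*lam) * y_n
    return y

lam = -1000.0
# Small step: |1 + h*lam| = 0.9 < 1, the numerical solution decays.
small = euler(lam, h=1e-4, steps=10000)   # integrates to t = 1
# Large step: |1 + h*lam| = 99 > 1, the error explodes.
large = euler(lam, h=0.1, steps=100)      # integrates to t = 10
```

For this model equation the stability threshold is h = 2/|λ|; any attempt to integrate with a larger step trades the decaying exact solution for a geometrically growing error.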
An autonomous system of ordinary differential equations
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s0878201.png" /></td> <td valign="top" style="width:5%;text-align:right;">(1)</td></tr></table>
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
is said to be stiff if, for any initial values <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s0878202.png" />, the following conditions are satisfied on a given interval <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s0878203.png" /> contained in the interval of existence of a solution of (1):
a) the maximum modulus of the eigenvalues of the Jacobi matrix (the spectral radius) is bounded along the solution <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s0878204.png" />:
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s0878205.png" /></td> </tr></table>

<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s0878206.png" /></td> </tr></table>
b) there exist numbers <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s0878207.png" /> such that if
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s0878208.png" /></td> </tr></table>

<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s0878209.png" /></td> </tr></table>
then the inequality
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782010.png" /></td> </tr></table>
is satisfied; here
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782011.png" /></td> </tr></table>
<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782012.png" /> is the [[Fundamental matrix|fundamental matrix]] of the system (1),
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782013.png" /></td> </tr></table>
and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782014.png" /> is the length of the boundary layer. All systems of type (1) for which the conditions a) and b) are satisfied simultaneously, after scaling the components of the vectors <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782015.png" /> for each solution, are called stiff.
A non-autonomous normal system of ordinary differential equations of order <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782016.png" /> is said to be stiff if the autonomous system of order <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782017.png" /> equivalent to it is stiff. The following scalar equation is an example of a stiff non-autonomous equation:
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782018.png" /></td> <td valign="top" style="width:5%;text-align:right;">(2)</td></tr></table>
where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782019.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782020.png" />, is a given function. Another example of a stiff differential system is the linear homogeneous system
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782021.png" /></td> <td valign="top" style="width:5%;text-align:right;">(3)</td></tr></table>
with constant <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782022.png" />-matrix <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782023.png" /> having different eigenvalues divided into two groups:
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782024.png" /></td> <td valign="top" style="width:5%;text-align:right;">(4)</td></tr></table>
Condition a) is obviously satisfied for (3). The norm is bounded above by <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782025.png" /> for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782026.png" />, where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782027.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782028.png" /> are certain constants. When <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782029.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782030.png" />, condition b) is satisfied. The vector <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782031.png" /> is bounded above if <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782032.png" />:
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782033.png" /></td> </tr></table>

<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782034.png" /></td> </tr></table>
Thus it is possible to construct for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782035.png" /> an algebraic interpolation polynomial of degree zero when <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782036.png" />, with uniformly-distributed nodes, and to express the interpolation step in the form <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782037.png" />, where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782038.png" /> is some constant (see [[#References|[1]]]) depending on the given error. On the other hand, use of Euler's polygonal method for (3) requires an upper bound on the size of the integration step on the entire interval of a solution of the system (see [[#References|[1]]]):
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782039.png" /></td> <td valign="top" style="width:5%;text-align:right;">(5)</td></tr></table>
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782040.png" /></td> </tr></table>
Here the components of the approximate solution to the system (3) by Euler's polygonal method corresponding to the first group of eigenvalues (4) will be represented with sufficient accuracy (see [[#References|[1]]]):
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782041.png" /></td> </tr></table>
Restrictions of the form (5) on the integration step are characteristic for extrapolation methods of Runge–Kutta or Adams type. The quotient <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782042.png" />, which can be regarded as a qualitative measure of the stiffness of the system (3), attains values of order <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782043.png" />–<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782044.png" /> in many cases. Mathematical descriptions of dynamical processes and phenomena in physics, chemistry, biology, technology, and economics take ever more factors into account, raising the order of the differential systems and thereby increasing their stiffness. Stiff differential systems require special methods of solution.
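Taking the quotient to be the ratio of the largest to the smallest eigenvalue modulus (an assumption, since the article's formula is an image), it can be computed directly. The matrix below is chosen for illustration:

```python
import math

# Illustrative computation of the stiffness quotient max|lam| / min|lam|
# for a hypothetical 2x2 matrix A = [[a, b], [c, d]] of type (3).
a, b, c, d = -1001.0, -1000.0, 1.0, 0.0

# Eigenvalues from the characteristic polynomial lam^2 - tr*lam + det = 0.
tr, det = a + d, a * d - b * c
disc = math.sqrt(tr * tr - 4.0 * det)
lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0   # here -1 and -1000

stiffness = max(abs(lam1), abs(lam2)) / min(abs(lam1), abs(lam2))
```

For this matrix the quotient is 10^3; the orders of magnitude quoted in the article arise in the same way from widely separated eigenvalue groups.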
In certain cases, the original system (1) can be transformed using the theory of, and asymptotic methods for, differential equations with a small parameter in front of the leading derivatives, the behaviour of the solution of which in the boundary layer is described in terms of exponentially-damped functions (see [[#References|[2]]]). However, it is usually difficult to reduce a stiff differential system to a similar form, and, moreover, the stiffness is not always essentially decreased after asymptotic transformation. Thus, numerical methods are used to solve systems of a general form. The main property of these methods is that they guarantee a suppression of the rapidly-damping components of the solution of (1) outside the boundary layer for a value of the integration step used near to that of the algebraic interpolation step.
It is appropriate to use implicit methods (see [[#References|[9]]], [[#References|[17]]]) in the solution of stiff differential systems. To construct implicit schemes, the method of undetermined coefficients can be used; on the basis of Taylor's formula the scheme can be written in the form
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782045.png" /></td> <td valign="top" style="width:5%;text-align:right;">(6)</td></tr></table>
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782046.png" /></td> </tr></table>
where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782047.png" /> is a positive integer, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782048.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782049.png" />. For example, such methods are described in [[#References|[9]]]:
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782050.png" /></td> <td valign="top" style="width:5%;text-align:right;">(7)</td></tr></table>
<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782051.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782052.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782053.png" /> — the implicit polygonal method;
<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782054.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782055.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782056.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782057.png" /> — the implicit <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782058.png" />-nd order method;
<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782059.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782060.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782061.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782062.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782063.png" /> — the implicit <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782064.png" />-rd order method;
<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782065.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782066.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782067.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782068.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782069.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782070.png" /> — the implicit 4th order method.
By the order of a method is understood the highest power <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782071.png" /> in the expansion (7) in powers of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782072.png" /> whose coefficient is the same as the corresponding coefficient in (6).
Application of the implicit polygonal method to the system (3) leads to the difference equations
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782073.png" /></td> <td valign="top" style="width:5%;text-align:right;">(8)</td></tr></table>
Suppose that the system (3) is asymptotically Lyapunov stable. Then the matrix <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782074.png" /> is non-singular for all <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782075.png" />. The Lagrange–Sylvester formula can be used to represent the solution of (8) in the form
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782076.png" /></td> <td valign="top" style="width:5%;text-align:right;">(9)</td></tr></table>
For the implicit polygonal method (8), the condition of asymptotic stability,
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782077.png" /></td> </tr></table>
is satisfied for all <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782078.png" />, and in (9) the components of the solution corresponding to the second group of eigenvalues (4) decrease rapidly with increasing <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782079.png" />. The value of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782080.png" /> is restricted only by the accuracy required of the approximate solution. The tendency to raise the order of linear multi-step methods conflicts to some extent with their stability (see [[#References|[11]]]).
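The absence of a step restriction for the implicit polygonal (backward Euler) method can be checked on the scalar test equation. The notation is assumed (the article's formulas are images): solving the implicit step for y' = λy gives the amplification factor 1/(1 - hλ).

```python
# Sketch: the implicit polygonal method applied to y' = lam*y gives
# y_{n+1} = y_n / (1 - h*lam), and |1/(1 - h*lam)| < 1 for every h > 0
# whenever Re(lam) < 0 -- no stability bound on the step.
def implicit_euler_factor(h, lam):
    return 1.0 / (1.0 - h * lam)

lam = -1000.0
for h in (1e-4, 1e-2, 1.0, 100.0):          # even very large steps decay
    assert abs(implicit_euler_factor(h, lam)) < 1.0

# Explicit Euler, by contrast, leaves the unit disc once h > 2/|lam|:
assert abs(1.0 + 0.1 * lam) > 1.0
```

The price of this freedom is that each step requires solving a (generally nonlinear) system, as in (8).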
The <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782081.png" />-step method
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782082.png" /></td> <td valign="top" style="width:5%;text-align:right;">(10)</td></tr></table>
is said to be <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782085.png" />-stable if all solutions of (10) tend to zero as <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782086.png" /> for a fixed positive <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782087.png" /> in the case when (10) is applied to the scalar equation
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782088.png" /></td> <td valign="top" style="width:5%;text-align:right;">(11)</td></tr></table>
with <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782089.png" /> a complex constant with negative real part. An explicit <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782090.png" />-step method cannot be <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782091.png" />-stable. The order of a linear multi-step <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782092.png" />-stable method cannot exceed two. The smallest error constant <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782093.png" /> is obtained for the implicit trapezium method (see [[#References|[11]]]):
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782094.png" /></td> <td valign="top" style="width:5%;text-align:right;">(12)</td></tr></table>
The requirement of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782095.png" />-stability imposes less rigid restrictions on the methods than <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782096.png" />-stability. The <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782097.png" />-step method (10) for fixed <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782098.png" /> is said to be <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s08782099.png" />-stable if all solutions of (10) tend to zero as <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820100.png" /> when (10) is applied to the equation (11), with <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820101.png" /> a complex constant in the set
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820102.png" /></td> </tr></table>
where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820103.png" /> is the complex plane. An explicit <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820104.png" />-step method cannot be <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820105.png" />-stable. There exists only one <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820106.png" />-stable <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820107.png" />-step method of order <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820108.png" />, namely (12). For <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820109.png" />, there exist <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820110.png" />-stable methods of order <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820111.png" /> with <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820112.png" /> (see [[#References|[13]]]).
The following weakening of the requirement of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820113.png" />-stability is contained in the definition of Gear (see [[#References|[14]]]): (10) is said to be a stiffly-stable method if all solutions of (10) tend to zero as <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820115.png" /> when (10) is applied to equation (11); here <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820116.png" /> is a complex constant in <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820117.png" />, where
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820118.png" /></td> </tr></table>
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820119.png" /></td> </tr></table>
and the given accuracy is guaranteed for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820120.png" />.
Method (7) is stiffly stable, and thus <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820121.png" />-stable. In these methods, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820122.png" />.
In accordance with (16), implicit analogues of the explicit Runge–Kutta methods of arbitrary order can be constructed that are moreover <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820123.png" />-stable and stiffly stable. For example, the <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820124.png" />-nd order method
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820125.png" /></td> <td valign="top" style="width:5%;text-align:right;">(13)</td></tr></table>
Application of (13) to (3) leads to the difference equations
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820126.png" /></td> </tr></table>
which prove that it is <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820127.png" />-stable. Similarly, for the <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820128.png" />-rd order method
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820129.png" /></td> <td valign="top" style="width:5%;text-align:right;">(14)</td></tr></table>
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820130.png" /></td> </tr></table>
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820131.png" /></td> </tr></table>
the following difference equations are similarly obtained:
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820132.png" /></td> </tr></table>
<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820133.png" />-stable and stiffly-stable methods of higher order are constructed in the same way. Methods (13) and (14), and those published in [[#References|[5]]], are fundamentally different from the so-called implicit Runge–Kutta methods (see [[#References|[16]]]), which have not become widely known because of their greater computational laboriousness. Implicit methods require more computational effort at each step than explicit ones, but for stiff differential systems this is justified by the possibility of a sharp increase in the size of the integration step. To solve (3), it is necessary to invert a matrix or to solve a system of linear equations. Here the problem of ill-conditioning may arise, because the condition number of the matrix <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820134.png" /> increases as <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820135.png" /> increases. In the general case of (1), at every integration step it is necessary to solve a non-linear system of equations for the vector <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820136.png" />. It is common to use a modification of Newton's method with the initial approximation calculated from an extrapolation formula, using <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820137.png" />. The extrapolation method is then called a predictor and the implicit method a corrector. The method of simple iteration is inapplicable to stiff differential systems because of the large values of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820138.png" />.
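As a concrete illustration of the predictor-corrector organisation just described, the following is a minimal sketch (the function names and the test system are illustrative assumptions, not taken from the article): an implicit (backward) Euler step with an explicit-Euler predictor and a Newton corrector, applied to a stiff linear system.

```python
import numpy as np

# Hedged sketch: one implicit (backward) Euler step for y' = f(y),
# solved by Newton's method with an explicit-Euler predictor.
# The stiff test system and all names here are illustrative.

def backward_euler_step(f, jac, y, tau, newton_iters=10, tol=1e-12):
    """Solve y_new = y + tau * f(y_new) by Newton's method."""
    y_new = y + tau * f(y)            # predictor: explicit Euler extrapolation
    n = y.size
    for _ in range(newton_iters):     # corrector: Newton iterations
        g = y_new - y - tau * f(y_new)
        dg = np.eye(n) - tau * jac(y_new)
        delta = np.linalg.solve(dg, g)    # one linear solve per iteration
        y_new = y_new - delta
        if np.linalg.norm(delta) < tol:
            break
    return y_new

# Stiff linear example: y' = A y with eigenvalues -1 and -1000.
A = np.array([[-1.0, 0.0], [0.0, -1000.0]])
f = lambda y: A @ y
jac = lambda y: A

y = np.array([1.0, 1.0])
tau = 0.1                 # far above the explicit-Euler limit 2/1000
for _ in range(50):
    y = backward_euler_step(f, jac, y, tau)
print(y)                  # both components decay, no error explosion
```

For a linear system the Newton corrector converges in a single iteration, so each step costs one linear solve; an explicit method with this step size would diverge violently on the fast component.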
Since Newton's method is applied in implicit schemes to solve the equations <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820139.png" />, and thus it is necessary to compute the Jacobi matrix of (1), this matrix is sometimes inserted directly into the formulas of the methods, which then also have the property of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820140.png" />-stability for the solution of linear systems (see [[#References|[12]]], [[#References|[15]]]). Gear's procedure (Gear's method), which automatically controls the error at each step and accordingly alters the order of the method or the step size, is widely applied in the solution of stiff differential systems. Methods (7) are also used as correctors in Gear's procedure (see [[#References|[9]]]). Another approach to the creation of methods for the integration of stiff systems of equations is connected with a calculation of the corresponding linear systems in the formulas of the solution methods (see [[#References|[4]]]–[[#References|[8]]], [[#References|[10]]]). In the first articles in this direction, stiff systems of equations were discussed that had known eigenvalues for the matrix <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820141.png" />, according to which the matrix entries of the methods were constructed. Because of the difficulty of solving the eigenvalue problem, this direction was not developed for a considerable time. In [[#References|[6]]], [[#References|[7]]], [[#References|[8]]], methods for constructing the matrix entries were proposed that did not require the solution of the eigenvalue problem for the matrix <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820142.png" /> of the system (1).
The methods in this direction can be constructed on the basis of the following equation (see [[#References|[7]]]):
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820143.png" /></td> <td valign="top" style="width:5%;text-align:right;">(15)</td></tr></table>
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820144.png" /></td> </tr></table>
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820145.png" /></td> </tr></table>
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820146.png" /></td> </tr></table>
where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820147.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820148.png" />, is a non-singular (<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820149.png" />)-matrix and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820150.png" /> is a matrix not depending on <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820151.png" />.
A different choice of the matrices <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820152.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820153.png" /> leads to difference equations corresponding to some method of numerical integration if the right-hand side of (15) is disregarded. For <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820154.png" />, explicit methods are obtained, and for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820155.png" />, implicit methods. Suppose that <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820156.png" />. Then the implicit polygonal method (7) corresponds to <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820157.png" />; the trapezium method (12) to <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820158.png" />; and the explicit polygonal method to <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820159.png" />. When <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820160.png" />, where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820161.png" /> is a constant (<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820162.png" />)-matrix, a generalized polygonal method is obtained (see [[#References|[7]]]):
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820163.png" /></td> <td valign="top" style="width:5%;text-align:right;">(16)</td></tr></table>
Formula (16) becomes the explicit polygonal method when <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820164.png" />. Method (16) gives an exact solution of the system of equations
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820165.png" /></td> <td valign="top" style="width:5%;text-align:right;">(17)</td></tr></table>
for the discrete values <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820166.png" />. If the matrix
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820167.png" /></td> </tr></table>
is known, then <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820168.png" /> iterations of the recurrence relation
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820169.png" /></td> <td valign="top" style="width:5%;text-align:right;">(18)</td></tr></table>
leads to the matrix
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820170.png" /></td> </tr></table>
applied in (16). As a first approximation to (18) for sufficiently small <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820171.png" />, it is appropriate to use the approximation formula
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820172.png" /></td> <td valign="top" style="width:5%;text-align:right;">(19)</td></tr></table>
or, if the eigenvalues of the matrix are real,
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820173.png" /></td> <td valign="top" style="width:5%;text-align:right;">(20)</td></tr></table>
Formula (19) gives a Lyapunov-stable correspondence between solutions of the differential system (17) and of the difference system (16) when <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820174.png" /> has complex eigenvalues with small real part. If the domain <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820175.png" /> of existence of the solutions is closed, convex in <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820176.png" />, and if the approximate solutions belong to this domain, then the error in (16) satisfies the following difference equation (see [[#References|[7]]]):
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820177.png" /></td> </tr></table>
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820178.png" /></td> </tr></table>
where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820179.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820180.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820181.png" /> are solutions of (1) and (16), respectively. Let <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820182.png" /> be the vector norm (the matrix norms are subordinate to this vector norm), and suppose that the following inequalities are satisfied in <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820183.png" />:
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820184.png" /></td> </tr></table>
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820185.png" /></td> </tr></table>
The following number is computed for a real matrix <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820186.png" />:
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820187.png" /></td> </tr></table>
Then the following estimates hold:
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820188.png" /></td> </tr></table>
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820189.png" /></td> </tr></table>
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820190.png" /></td> </tr></table>
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820191.png" /></td> </tr></table>
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820192.png" /></td> </tr></table>
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820193.png" /></td> </tr></table>
If <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820194.png" />, the error <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820195.png" /> can be estimated on the assumption that <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820196.png" />. Other vector norms are possible in the estimates, with corresponding matrix norms and logarithmic norms (see [[#References|[3]]]). These estimates prove that in the solution of (1), the integration step <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820197.png" /> can be taken significantly larger than in classical methods. The matrix <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820198.png" /> must be chosen in such a way that all its entries are close to those of the Jacobi matrix of the system (1). In the boundary layer, when the variables change rapidly, by estimating <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820199.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820200.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820201.png" />, and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820202.png" /> roughly for the approximate solution, it is possible to change <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820203.png" /> so as to obtain the necessary accuracy.
Since the variables in (1) change slowly outside the boundary layer, it often turns out that one matrix <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820204.png" /> is sufficient to compute all solutions when <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820205.png" />. The checking of the accuracy can be accomplished by using Runge's rule (see [[#References|[1]]]).
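Runge's rule, mentioned above as the accuracy check, compares a step of size tau with two steps of size tau/2 and divides the difference by 2^p - 1 for a method of order p. A hedged sketch (implicit Euler on a scalar test equation is an illustrative choice, not the article's setting):

```python
import numpy as np

# Hedged sketch of Runge's rule: estimate the local error of a one-step
# method of order p by comparing one step of size tau with two of tau/2.
# Implicit Euler on y' = lam * y (order p = 1) serves as the example.

def implicit_euler(y, tau, lam):
    return y / (1 - tau * lam)    # exact solve for the linear test equation

def runge_error_estimate(y, tau, lam, p=1):
    coarse = implicit_euler(y, tau, lam)
    fine = implicit_euler(implicit_euler(y, tau / 2, lam), tau / 2, lam)
    est = (fine - coarse) / (2 ** p - 1)    # Runge's rule estimate
    return fine, est

lam, tau, y0 = -50.0, 0.01, 1.0
y, est = runge_error_estimate(y0, tau, lam)
true_err = np.exp(lam * tau) * y0 - y
# est and true_err agree in sign and order of magnitude
```

In a step-size controller the estimate `est` is compared with the tolerance: the step is accepted and possibly enlarged when the estimate is small, and repeated with a smaller tau otherwise.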
To increase the accuracy, a class of methods for the numerical integration of systems has been suggested in [[#References|[7]]], based on the method (16). The methods in this class satisfy the following requirements: 1) an <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820206.png" />-th order method must be exact for the integration of algebraic polynomials of degree <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820207.png" /> when <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820208.png" />; and 2) a method of arbitrary order must be exact for the solution of (17).
The one-step <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820209.png" />-nd order method has the form:
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820210.png" /></td> </tr></table>
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820211.png" /></td> </tr></table>
Explicit one-step 3-rd order methods and multi-step methods for systems are constructed in the same way. Asymptotic estimates for the errors have been found. For <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820212.png" />, these methods become formulas of Runge–Kutta or Adams type. In the approximation of integrals of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820213.png" /> by fractional-linear matrix polynomials in <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820214.png" />, taking into account the non-singularity of the corresponding matrices, the methods for systems go over to the explicit formulas of the corresponding implicit methods obtained after iteration by Newton's method. The following is an implicit <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820215.png" />-st order method for systems:
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820216.png" /></td> <td valign="top" style="width:5%;text-align:right;">(21)</td></tr></table>
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820217.png" /></td> </tr></table>
It is obtained from (15) if one chooses
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820218.png" /></td> </tr></table>
The equations (21) are obtained by the method of reducing differential equations to integral equations (see [[#References|[8]]]). The integral of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087820/s087820219.png" /> in (21) is calculated according to (18) with the initial condition (20). A method of correction of this integral in the course of solving has been investigated (see ).
====References====
<table><TR><TD valign="top">[1]</TD> <TD valign="top"> N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top"> A.B. Vasil'eva, "Constructions of uniform approximations to solutions of systems of differential equations with small parameter in front of the leading derivative" ''Mat. Sb.'' , '''50''' (1960) pp. 43–58 (In Russian)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top"> B.F. Bylov, R.E. Vinograd, D.M. Grobman, V.V. Nemytskii, "The theory of Lyapunov exponents and its applications to problems of stability" , Moscow (1966) (In Russian)</TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top"> M.K. Gavurin, "An experiment in the numerical integration of ordinary differential equations" ''Met. Vychisl.'' , '''1''' (1963) pp. 45–51 (In Russian)</TD></TR><TR><TD valign="top">[5]</TD> <TD valign="top"> Yu.V. Rakitskii, "Asymptotic error formulas for solutions of systems of ordinary differential equations by functional numerical methods" ''Soviet Math. Dokl.'' , '''11''' : 4 (1970) pp. 861–863 ''Dokl. Akad. Nauk SSSR'' , '''193''' : 1 (1970) pp. 40–42</TD></TR><TR><TD valign="top">[6]</TD> <TD valign="top"> Yu.V. Rakitskii, "Methods for successive step increase in the numerical integration of systems of ordinary differential equations" ''Soviet Math. Dokl.'' , '''13''' : 6 (1972) pp. 1624–1627 ''Dokl. Akad. Nauk SSSR'' , '''207''' : 4 (1972) pp. 793–795</TD></TR><TR><TD valign="top">[7]</TD> <TD valign="top"> Yu.V. Rakitskii, ''Trudy Leningrad. Polytech. Inst.'' , '''332''' (1973) pp. 88–97</TD></TR><TR><TD valign="top">[8]</TD> <TD valign="top"> B.V. Pavlov, A.Ya. Povzner, "A method for the numerical integration of systems of ordinary differential equations" ''USSR Comp. Math. Math. Phys.'' , '''13''' : 4 (1973) pp. 292–297 ''Zh. Vychisl. Mat. i Mat. Fiz.'' , '''13''' : 4 (1973) pp. 1056–1059</TD></TR><TR><TD valign="top">[9]</TD> <TD valign="top"> C.F. Curtiss, J.O. Hirschfelder, ''Proc. Nat. Acad. Sci. USA'' , '''38''' (1952) pp. 235–243</TD></TR><TR><TD valign="top">[10]</TD> <TD valign="top"> R.H.S. Mah, S. Michaelson, R.W. Sargent, ''J. Chem. Eng. Sci.'' , '''17''' (1962) pp. 619–639</TD></TR><TR><TD valign="top">[11]</TD> <TD valign="top"> G. Dahlquist, "A special stability problem for linear multistep methods" ''Nordisk. Tidskr. Informationsbehandling'' , '''3''' (1963) pp. 27–43</TD></TR><TR><TD valign="top">[12]</TD> <TD valign="top"> H.H. Rosenbrock, "Some general implicit processes for the numerical solution of differential equations" ''Comput. J.'' , '''5''' (1963) pp. 329–330</TD></TR><TR><TD valign="top">[13]</TD> <TD valign="top"> O.B. Widlund, "A note on unconditionally stable linear multistep methods" ''Nordisk. Tidskr. Informationsbehandling'' , '''7''' (1967) pp. 65–70</TD></TR><TR><TD valign="top">[14]</TD> <TD valign="top"> C.W. Gear, "The automatic integration of stiff ordinary differential equations (with discussion)" , ''Information processing 68'' , '''1''' , North-Holland (1969) pp. 187–193</TD></TR><TR><TD valign="top">[15]</TD> <TD valign="top"> J.D. Lambert, S.T. Sigurdsson, "Multistep methods with variable matrix coefficients" ''SIAM J. Numer. Anal.'' , '''9''' (1972) pp. 715–733</TD></TR><TR><TD valign="top">[16]</TD> <TD valign="top"> J.D. Lambert, "Computational methods in ordinary differential equations" , Wiley (1973)</TD></TR><TR><TD valign="top">[17]</TD> <TD valign="top"> R.A. Willoughby (ed.) , ''Stiff differential systems'' , ''The IBM research symposia series'' , Plenum (1974)</TD></TR></table>
+ | |||
+ | |||
====Comments==== | ====Comments==== |
Revision as of 14:53, 7 June 2020
A system of ordinary differential equations whose numerical solution by explicit methods of Runge–Kutta or Adams type requires the integration step to remain small despite the slow change in the desired variables. Attempts to reduce the computing time by increasing the integration step lead to a sharp increase in the error (error explosion).
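This behaviour can be seen on the scalar test problem $y' = \lambda y$. The following sketch (an illustration, not part of the article; the values of $\lambda$ and $h$ are chosen for the example) compares one hundred explicit and implicit Euler steps:

```python
# Illustration (not from the article): explicit vs. implicit Euler on the
# stiff scalar problem y' = lam*y with lam = -1000, whose exact solution
# decays monotonically. The step h = 0.01 violates the explicit stability
# bound |1 + h*lam| < 1, so the explicit iterates explode, while the
# implicit iterates decay for any h > 0.
lam = -1000.0
h = 0.01
steps = 100

y_exp = 1.0   # explicit Euler: y_{m+1} = (1 + h*lam) * y_m
y_imp = 1.0   # implicit Euler: y_{m+1} = y_m / (1 - h*lam)
for _ in range(steps):
    y_exp = (1.0 + h * lam) * y_exp
    y_imp = y_imp / (1.0 - h * lam)

print(abs(y_exp))   # astronomically large: error explosion
print(abs(y_imp))   # vanishingly small: correct qualitative behaviour
```

Here the growth factor of the explicit scheme is $|1 + h\lambda| = 9$ per step, while the implicit scheme contracts by $|1 - h\lambda|^{-1} = 1/11$ per step.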
An autonomous system of ordinary differential equations

$$\frac{du}{dt} = f(u), \qquad u \in \mathbf{R}^n, \tag{1}$$
is said to be stiff if, for any initial values $u(0) = u^0$, the following conditions are satisfied on a given interval $[0, T]$ contained in the interval of existence of a solution of (1):
a) the maximum modulus of the eigenvalues $\lambda_j$ of the Jacobi matrix $\partial f / \partial u$ (the spectral radius) is bounded along the solution $u(t)$:

$$\max_j |\lambda_j(u(t))| \le L, \qquad 0 \le t \le T;$$
b) there exist numbers $C > 0$ and $\tau > 0$, $\tau \ll T$, such that if

$$t_0 \ge 0, \qquad t_0 + \tau \le t \le T,$$

then the inequality

$$\|\Omega(t, t_0)\| \le C$$

is satisfied; here $\Omega(t, t_0)$, with $\Omega(t_0, t_0) = I$, is the fundamental matrix of the system (1), and $\tau$ is the length of the boundary layer. All systems of type (1) for which the conditions a) and b) are satisfied simultaneously, after scaling the components of the vectors, for each solution, are called stiff.
A non-autonomous normal system of ordinary differential equations of order $n$ is said to be stiff if the autonomous system of order $n + 1$ equivalent to it is stiff. The following scalar equation is an example of a stiff non-autonomous equation:

$$\frac{dy}{dx} = \lambda\,(y - \varphi(x)) + \varphi'(x), \tag{2}$$

where $\lambda$ is a constant with $\operatorname{Re}\lambda < 0$ and $|\lambda|$ large, and $\varphi(x)$ is a given function. Another example of a stiff differential system is the linear homogeneous system
$$\frac{du}{dt} = Au \tag{3}$$
with constant $(n \times n)$-matrix $A$ having distinct eigenvalues $\lambda_1, \dots, \lambda_n$ divided into two groups:

$$|\lambda_i| \le \alpha, \quad i = 1, \dots, k; \qquad \operatorname{Re}\lambda_i \le -\beta < 0, \quad i = k+1, \dots, n, \qquad \beta \gg \alpha. \tag{4}$$
Condition a) is obviously satisfied for (3). The norm $\|e^{A(t - t_0)}\|$ is bounded above by $C e^{\alpha (t - t_0)}$ for $t \ge t_0$, where $C$ and $\alpha$ are certain constants. When $\tau \sim 1/\beta$ and $\beta \gg \alpha$, condition b) is satisfied. The vector $u(t)$ is bounded above if $t \ge \tau$:

$$\|u(t)\| \le C e^{\alpha t}\, \|u(0)\|.$$

Thus it is possible to construct for $u(t)$ an algebraic interpolation polynomial of degree zero when $t \ge \tau$, with uniformly-distributed nodes, and to express the interpolation step in the form $h = c/\alpha$, where $c$ is some constant (see [1]) depending on the given error. On the other hand, use of Euler's polygonal method for (3) requires an upper bound on the size of the integration step on the entire interval of existence of a solution of the system (see [1]):

$$h \le \frac{2}{\max_i |\lambda_i|}. \tag{5}$$

Here the components of the approximate solution to the system (3) by Euler's polygonal method corresponding to the first group of eigenvalues (4) will be represented with sufficient accuracy (see [1]).
Restrictions of the form (5) on the integration step are characteristic for extrapolation methods of Runge–Kutta or Adams type. The quotient $\beta/\alpha$, which can be regarded as a qualitative measure of the stiffness of the system (3), attains very large values in many cases. The mathematical descriptions of dynamical processes and phenomena in physics, chemistry, biology, technology, and economics, connected with the calculation of an ever larger number of factors, raise the order of the differential systems and lead to an increase in stiffness. Stiff differential systems require special methods of solution.
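As an illustration (the $2 \times 2$ matrix below is a hypothetical example, not from the article), the stiffness measure of a linear system $du/dt = Au$ can be computed directly from the eigenvalues of $A$:

```python
import math

# Illustration: stiffness ratio of du/dt = A*u for a symmetric 2x2 matrix.
# The eigenvalues are the roots of lambda^2 - tr(A)*lambda + det(A) = 0;
# the ratio of the largest to the smallest modulus of the (negative real)
# eigenvalues serves as a qualitative measure of stiffness.
A = [[-500.5, 499.5],
     [499.5, -500.5]]

tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)   # real, since A is symmetric
lam_slow = (tr + disc) / 2.0            # -1    (slow mode)
lam_fast = (tr - disc) / 2.0            # -1000 (fast mode)

stiffness_ratio = abs(lam_fast) / abs(lam_slow)
print(lam_slow, lam_fast, stiffness_ratio)
```

Here the fast mode decays a thousand times more quickly than the slow one, so an explicit step bound of the form (5) is dictated by the fast mode even after it has died out.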
In certain cases, the original system (1) can be transformed using the theory of, and asymptotic methods for, differential equations with a small parameter in front of the leading derivatives, the behaviour of the solution of which in the boundary layer is described in terms of exponentially-damped functions (see [2]). However, it is usually difficult to reduce a stiff differential system to a similar form, and, moreover, the stiffness is not always essentially decreased after asymptotic transformation. Thus, numerical methods are used to solve systems of a general form. The main property of these methods is that they guarantee a suppression of the rapidly-damping components of the solution of (1) outside the boundary layer for a value of the integration step used near to that of the algebraic interpolation step.
It is appropriate to use implicit methods (see [9], [17]) in the solution of stiff differential systems. To construct implicit schemes, the method of undetermined coefficients can be used: on the basis of Taylor's formula the solution is expanded in the form

(6)

where $s$ is a positive integer. For example, such methods are described in [9]:

(7)
with suitable choices of the coefficients these give the implicit polygonal method, the implicit second-order method, the implicit third-order method and the implicit fourth-order method.
By the order of a method is understood the highest power of $h$ in the expansion (7) whose coefficient is the same as the corresponding coefficient in (6).
Application of the implicit polygonal method to the system (3) leads to the difference equations

$$(I - hA)\, u_{m+1} = u_m. \tag{8}$$
Suppose that the system (3) is asymptotically Lyapunov stable. Then the matrix $I - hA$ is non-singular for all $h > 0$. The Lagrange–Sylvester formula can be used to represent the solution of (8) in the form

$$u_m = (I - hA)^{-m} u^0 = \sum_{j=1}^{n} \frac{c_j}{(1 - h\lambda_j)^m}\, e_j, \tag{9}$$

where $e_j$ are the eigenvectors of $A$ and the coefficients $c_j$ are determined by $u^0$.
For the implicit polygonal method (8), the condition of asymptotic stability,

$$|1 - h\lambda_j| > 1, \qquad j = 1, \dots, n,$$

is satisfied for all $h > 0$, and in (9) the components of the solution corresponding to the second group of eigenvalues (4) decrease rapidly with increasing $m$. The value of $h$ is restricted only by the requirements of the desired accuracy of the approximate solutions. The tendency to raise the order of linear multi-step methods conflicts with their stability (see [11]).
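A numerical sketch of this behaviour (the matrix is a hypothetical $2 \times 2$ example): the implicit polygonal method reduces each step to the linear solve $(I - hA)\,u_{m+1} = u_m$, and the iterates decay even for a step far beyond the explicit bound.

```python
# Illustration: implicit polygonal (implicit Euler) scheme for a stiff 2x2
# linear system du/dt = A*u, with a step roughly 50 times larger than the
# explicit Euler bound would allow. Each step solves the 2x2 linear system
# (I - h*A) u_new = u_old by Cramer's rule.
A = [[-500.5, 499.5],
     [499.5, -500.5]]   # eigenvalues -1 and -1000
h = 0.1                 # explicit Euler would need roughly h < 2/1000

# M = I - h*A
M = [[1.0 - h * A[0][0], -h * A[0][1]],
     [-h * A[1][0], 1.0 - h * A[1][1]]]
det_M = M[0][0] * M[1][1] - M[0][1] * M[1][0]

u = [1.0, 0.0]
for _ in range(200):
    u = [(u[0] * M[1][1] - u[1] * M[0][1]) / det_M,
         (u[1] * M[0][0] - u[0] * M[1][0]) / det_M]

print(u)   # both components have decayed essentially to zero
```

The per-step contraction factors are $1/(1 - h\lambda_j)$, i.e. $1/1.1$ for the slow mode and $1/101$ for the fast mode, so both components decay unconditionally in $h$.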
The $k$-step method

$$\sum_{i=0}^{k} \alpha_i y_{m+i} = h \sum_{i=0}^{k} \beta_i f_{m+i} \tag{10}$$

is said to be $A$-stable if all solutions of (10) tend to zero as $m \to \infty$ for any fixed positive $h$ in the case when (10) is applied to the scalar equation

$$\frac{dy}{dx} = \lambda y \tag{11}$$

with a complex constant $\lambda$ with negative real part. An explicit $k$-step method cannot be $A$-stable. The order of a linear multi-step $A$-stable method cannot exceed two. The smallest error constant is obtained for the implicit trapezium method (see [11]):
$$y_{m+1} = y_m + \frac{h}{2}\,(f_m + f_{m+1}). \tag{12}$$
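The $A$-stability of the trapezium method can be checked directly: applied to (11) it gives the stability function $R(z) = (1 + z/2)/(1 - z/2)$ with $z = h\lambda$, and $|R(z)| < 1$ exactly when $\operatorname{Re} z < 0$. A small sketch (the sample points are arbitrary):

```python
# Illustration: the stability function of the implicit trapezium method
# applied to y' = lam*y is R(z) = (1 + z/2) / (1 - z/2), z = h*lam.
# |R(z)| < 1 for every z with Re z < 0, which is A-stability: the whole
# left half-plane lies in the region of absolute stability.
def R(z):
    return (1.0 + z / 2.0) / (1.0 - z / 2.0)

samples = [complex(-0.1, 5.0), complex(-10.0, 3.0), complex(-1000.0, 0.0)]
mods = [abs(R(z)) for z in samples]
print(mods)   # every modulus is strictly below 1
```

Note that $|R(z)| \to 1$ as $\operatorname{Re} z \to -\infty$, so the trapezium method damps the stiff components only slowly, in contrast with the implicit polygonal method, for which the growth factor $1/(1 - z)$ tends to zero.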
The requirement of $A(\alpha)$-stability imposes less rigid restrictions on the methods than $A$-stability. The $k$-step method (10) for fixed $\alpha \in (0, \pi/2)$ is said to be $A(\alpha)$-stable if all solutions of (10) tend to zero as $m \to \infty$ when (10) is applied to the equation (11), with a complex constant $\lambda$ in the set

$$\{ z \in \mathbf{C} : |\arg(-z)| < \alpha,\ z \neq 0 \},$$

where $\mathbf{C}$ is the complex plane. An explicit $k$-step method cannot be $A(\alpha)$-stable. There exists only one $A(\alpha)$-stable $k$-step method of order $k + 1$, namely (12). For $k = 3, 4$, there exist $A(\alpha)$-stable methods of order $k$ with $\alpha$ arbitrarily close to $\pi/2$ (see [13]).
The following weakening of the requirement of $A$-stability is contained in the so-called definition of Gear (see [14]): (10) is said to be a stiffly-stable method if all solutions of (10) tend to zero as $m \to \infty$ when (10) is applied to equation (11), where $\lambda$ is a complex constant with $h\lambda$ in the region

$$R_1 = \{ z : \operatorname{Re} z \le -a \}, \qquad a > 0,$$

and the given accuracy is guaranteed for $h\lambda$ in the region

$$R_2 = \{ z : -a \le \operatorname{Re} z \le b,\ |\operatorname{Im} z| \le c \}.$$
The methods (7) are stiffly stable, and thus $A(\alpha)$-stable.
In accordance with (6), implicit analogues of the explicit Runge–Kutta methods of arbitrary order can be constructed that are moreover $A$-stable and stiffly stable. For example, the second-order method
(13)
Application of (13) to (3) leads to difference equations which prove that it is $A$-stable. Similarly, for the third-order method
(14)
the corresponding difference equations are similarly obtained. $A$-stable and stiffly-stable methods of higher order are constructed in the same way. Methods (13) and (14), and those published in [5], are fundamentally different from the so-called implicit Runge–Kutta methods (see [16]), which have not become widely used because of their greater computational laboriousness. Application of implicit methods requires more computational effort at each step than is the case for explicit methods, but in stiff differential systems this is justified by the sharp increase in the admissible size of the integration step. To solve (3), it is necessary to invert a matrix or to solve a system of linear equations. Here the problem of ill-conditioning may arise, because the condition number of the matrix increases as $h$ increases. In the general case of (1), at every integration step it is necessary to solve a non-linear system of equations for the vector $u_{m+1}$. It is common to use a modification of Newton's method, with the initial approximation calculated from an extrapolation formula. The extrapolation method is then said to be a predictor and the implicit method a corrector. The method of simple iteration is inapplicable in stiff differential systems because of the large values of $h\,\|\partial f / \partial u\|$. Since Newton's method is applied in implicit schemes, it is necessary to compute the Jacobi matrix of (1); this matrix is sometimes inserted directly into the formulas of methods that also have the property of $A$-stability for the solution of linear systems (see [12], [15]). Gear's procedure (Gear's method) is widely applied in the solution of stiff differential systems, with automatic control of the error at each step, on the basis of which it alters the order of the method or the step size. Methods (7) are also used as correctors in Gear's procedure (see [9]).
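The predictor–corrector organisation described above can be sketched for a scalar equation (a hypothetical model problem in the spirit of (2); all names here are chosen for the example): one implicit polygonal step, with an explicit Euler predictor supplying the initial guess and Newton's method as corrector.

```python
import math

# Illustration (hypothetical model problem): one step of the implicit
# polygonal method for the stiff scalar equation
#     y' = f(x, y) = lam * (y - cos(x)),  lam = -1000,
# solved for y_{m+1} by Newton's method; the explicit Euler predictor
# supplies the initial guess, the corrector solves g(Y) = 0 with
#     g(Y) = Y - y_m - h * f(x_{m+1}, Y).
lam = -1000.0

def f(x, y):
    return lam * (y - math.cos(x))

def df_dy(x, y):
    return lam   # Jacobian of f with respect to y (constant here)

def implicit_euler_step(x, y, h, iters=10):
    y_new = y + h * f(x, y)                       # predictor: explicit Euler
    for _ in range(iters):                        # corrector: Newton iteration
        g = y_new - y - h * f(x + h, y_new)
        dg = 1.0 - h * df_dy(x + h, y_new)
        y_new -= g / dg
    return y_new

x, y, h = 0.0, 5.0, 0.05
for _ in range(40):
    y = implicit_euler_step(x, y, h)
    x += h

print(x, y)   # the iterates are driven rapidly onto the slow solution y ~ cos(x)
```

Simple iteration would require $h\,|\lambda| < 1$ here, whereas the Newton corrector converges for this $h$ (for this linear-in-$y$ equation it converges in one iteration).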
Another approach to the creation of methods for the integration of stiff systems of equations is connected with the use of the matrices of the corresponding linear systems in the formulas of the solution methods (see [4]–[8], [10]). In the first articles in this direction, stiff systems of equations were discussed for which the eigenvalues of the matrix of the system were known, and according to these the matrix coefficients of the methods were constructed. Because of the difficulty of solving the eigenvalue problem, this direction was not developed for a considerable time. In [6], [7], [8], methods for constructing the matrix coefficients were proposed that do not require the solution of the eigenvalue problem for the matrix of the system (1). The methods in this direction can be constructed on the basis of the following equation (see [7]):
(15)
where $B$ is a non-singular $(n \times n)$-matrix and the remaining matrix coefficients do not depend on $h$. Different choices of these matrices lead to difference equations corresponding to some method of numerical integration if the right-hand side of (15) is disregarded; certain choices yield explicit methods, others implicit ones. In particular, the implicit polygonal method (7), the trapezium method (12) and the explicit polygonal method are all obtained in this way. When $B$ is a constant $(n \times n)$-matrix, a generalized polygonal method is obtained (see [7]):
$$u_{m+1} = u_m + \Phi(hB)\, h\, f(u_m), \qquad \Phi(hB) = (hB)^{-1}\bigl(e^{hB} - I\bigr). \tag{16}$$

Formula (16) becomes the explicit polygonal method when $B = 0$. Method (16) gives an exact solution of the system of equations

$$\frac{du}{dt} = Bu + g, \qquad g = \mathrm{const}, \tag{17}$$

for the discrete values $t_m = mh$. If the matrix $e^{hB}$ is known, then iteration of the recurrence relation

(18)

leads to the matrix
applied in (16). As a first approximation in (18) for sufficiently-small $h$, it is appropriate to use the approximation formula

(19)

or, if the eigenvalues of the matrix $B$ are real,

(20)
Formula (19) gives a Lyapunov-stable correspondence between the solutions of the differential system (17) and of the difference system (16) when $B$ has complex eigenvalues with small real part. If the domain of existence of the solutions is closed and convex in $u$, and if the approximate solutions belong to this domain, then the error satisfies a difference equation relating the solutions of (1) and (16) (see [7]). Let $\|u\|$ be the norm of the vector $u$ (the norms of the matrices are subordinate to this vector norm), suppose that appropriate bounds on $f$ and on its Jacobi matrix hold in this domain, and let a logarithmic norm be computed for the matrix $B$. Then estimates of the error hold.
The error can also be estimated in other vector norms, with corresponding matrix norms and logarithmic norms (see [3]). These estimates prove that in the solution of (1) the integration step can be taken significantly larger than in classical methods. The matrix $B$ must be chosen in such a way that all its entries are close to those of the Jacobi matrix of the system (1). In the boundary layer, when the variables change rapidly, rough estimates for the approximate solution make it possible to change $h$ so as to obtain the necessary accuracy. Since the variables in (1) change slowly outside the boundary layer, it often turns out that one matrix $B$ is sufficient to compute all the solutions. The checking of the accuracy can be accomplished by using Runge's rule (see [1]).
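A scalar sketch of the generalized polygonal method (16) (assuming, as in the reconstruction used here, the exponential coefficient $\varphi(z) = (e^z - 1)/z$; this form is an assumption): for a scalar instance of (17) the scheme reproduces the exact solution at the grid points for any step size.

```python
import math

# Illustration: the generalized polygonal method
#     y_{m+1} = y_m + phi(h*b) * h * f(y_m),   phi(z) = (e^z - 1)/z,
# applied to the scalar linear equation y' = b*y + c (a scalar instance
# of (17)). The method is exact at the grid points t_m = m*h, here even
# for a very stiff b and a large step h.
b, c = -800.0, 3.0
h = 0.5

def f(y):
    return b * y + c

def phi(z):
    return (math.exp(z) - 1.0) / z

y, t = 2.0, 0.0
for _ in range(4):
    y = y + phi(h * b) * h * f(y)
    t += h

# exact solution: y(t) = (y0 + c/b) * exp(b*t) - c/b
y_exact = (2.0 + c / b) * math.exp(b * t) - c / b
print(y, y_exact)   # the two values agree to rounding error
```

Exactness follows from $y_m + \varphi(hb)\,h\,(b\,y_m + c) = e^{hb} y_m + (e^{hb} - 1)\,c/b$, which is precisely the exact propagation over one step.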
To increase the accuracy, a class of methods for the numerical integration of systems has been suggested in [7], based on the method (16). The methods in this class satisfy the following requirements: 1) a method of order $s$ must be exact for the integration of algebraic polynomials of degree $s$ when $B = 0$; and 2) a method of arbitrary order must be exact for the solution of (17).
The one-step second-order method of this class can be written down explicitly. Explicit one-step third-order methods and multi-step methods are constructed in the same way, and asymptotic estimates for the errors have been found. For $B = 0$, these methods become formulas of Runge–Kutta or Adams type. If the matrix exponentials involved are approximated by fractional-linear matrix functions of $hB$, taking into account the non-singularity of the corresponding matrices, these methods go over into the explicit formulas of the corresponding implicit methods obtained after one iteration of Newton's method. The following is an implicit first-order method:
(21)
It is obtained from (15) by a suitable choice of the matrices. The equations (21) are obtained by the method of reducing differential equations to integral equations (see [8]). The integral term in (21) is calculated according to (18) with the initial condition (20). A method of correcting this integral in the course of the solution has been investigated.
References
[1] | N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian) |
[2] | A.B. Vasil'eva, "Constructions of uniform approximations to solutions of systems of differential equations with small parameter in front of the leading derivative" Mat. Sb. , 50 (1960) pp. 43–58 (In Russian) |
[3] | B.F. Bylov, R.E. Vinograd, D.M. Grobman, V.V. Nemytskii, "The theory of Lyapunov exponents and its applications to problems of stability" , Moscow (1966) (In Russian) |
[4] | M.K. Gavurin, "An experiment in the numerical integration of ordinary differential equations" Met. Vychisl. , 1 (1963) pp. 45–51 (In Russian) |
[5] | Yu.V. Rakitskii, "Asymptotic error formulas for solutions of systems of ordinary differential equations by functional numerical methods" Soviet Math. Dokl. , 11 : 4 (1970) pp. 861–863 Dokl. Akad. Nauk SSSR , 193 : 1 (1970) pp. 40–42 |
[6] | Yu.V. Rakitskii, "Methods for successive step increase in the numerical integration of systems of ordinary differential equations" Soviet Math. Dokl. , 13 : 6 (1972) pp. 1624–1627 Dokl. Akad. Nauk SSSR , 207 : 4 (1972) pp. 793–795 |
[7] | Yu.V. Rakitskii, Trudy Leningrad. Polytech. Inst. , 332 (1973) pp. 88–97 |
[8] | B.V. Pavlov, A.Ya. Povzner, "A method for the numerical integration of systems of ordinary differential equations" USSR Comp. Math. Math. Phys. , 13 : 4 (1973) pp. 292–297 Zh. Vychisl. Mat. i Mat. Fiz. , 13 : 4 (1973) pp. 1056–1059 |
[9] | C.F. Curtiss, J.O. Hirschfelder, "Integration of stiff equations" Proc. Nat. Acad. Sci. USA , 38 (1952) pp. 235–243 |
[10] | R.H.S. Mah, S. Michaelson, R.W. Sargent, J. Chem. Eng. Sci. , 17 (1962) pp. 619–639 |
[11] | G. Dahlquist, "A special stability problem for linear multistep methods" Nordisk. Tidskr. Informationsbehandling , 3 (1963) pp. 27–43 |
[12] | H.H. Rosenbrock, "Some general implicit processes for the numerical solution of differential equations" Comput. J. , 5 (1963) pp. 329–330 |
[13] | O.B. Widlund, "A note on unconditionally stable linear multistep methods" Nordisk. Tidskr. Informationsbehandling , 7 (1967) pp. 65–70 |
[14] | C.W. Gear, "The automatic integration of stiff ordinary differential equations (with discussion)" , Information processing 68 , 1 , North-Holland (1969) pp. 187–193 |
[15] | J.D. Lambert, S.T. Sigurdsson, "Multistep methods with variable matrix coefficients" SIAM J. Numer. Anal. , 9 (1972) pp. 715–733 |
[16] | J.D. Lambert, "Computational methods in ordinary differential equations" , Wiley (1973) |
[17] | R.A. Willoughby (ed.) , Stiff differential systems , The IBM research symposia series , Plenum (1974) |
Comments
The use of asymptotic methods for the system (1) as described in [2] has also been investigated in [a1]. The development of numerical methods for stiff differential equations was stimulated by [a2]. A number of theses have recently been devoted to this subject. For an introduction to the theory of stiff differential equations, [a3] can be used as a reference.
References
[a1] | J.E. Flaherty, R.E. O'Malley, "Numerical methods for stiff systems of two-point boundary value problems" SIAM J. Sci. and Statist. Comp. , 5 (1984) pp. 865–886 |
[a2] | W. Lininger, R.A. Willoughby, "Efficient integration methods for stiff systems of ordinary differential equations" SIAM J. Numer. Anal. , 7 (1970) pp. 47–66 |
[a3] | K. Dekker, J.G. Verwer, "Stability of Runge–Kutta methods for stiff nonlinear differential equations" , CWI Monographs , 2 , North-Holland (1984) |
[a4] | W.L. Miranker, "Numerical methods for stiff equations" , Reidel (1981) |
Stiff differential system. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Stiff_differential_system&oldid=49448