Stiff differential system

From Encyclopedia of Mathematics

Revision as of 05:30, 24 February 2022


A system of ordinary differential equations in whose numerical solution by explicit methods of Runge–Kutta or Adams type the integration step has to remain small despite the slow change in the desired variables. Attempts to reduce the time for calculating the solution of a stiff differential system by increasing the integration step lead to a sharp increase in the error (error explosion).

An autonomous system of ordinary differential equations

$$ \tag{1 } \frac{d z ( t) }{dt} = f ( z ( t) ) ,\ \ f ( z) \in C ^ {p} ( G) ,\ \ G \subset \mathbf R ^ {m} , $$

is said to be stiff if, for any initial values $ z ( 0) = z ^ {0} \in G $, the following conditions are satisfied on a given interval $ [ 0 , T ] $ contained in the interval of existence of a solution of (1):

a) the maximum modulus of the eigenvalues of the Jacobi matrix (the spectral radius) is bounded along the solution $ z ( t) $:

$$ 0 < L \leq \rho \left ( \frac{\partial f ( z) }{\partial z } \right ) \leq \ \left \| \frac{\partial f ( z) }{\partial z } \right \| = $$

$$ = \ \left \| \left . \frac{\partial K ( t + \tau , t ) }{ \partial \tau } \right | _ {\tau = 0 } \right \| < \infty ,\ 0 \leq t \leq T ; $$

b) there exist numbers $ \tau _ {b} , N , \nu $ such that if

$$ 0 < \tau _ {b} \ll T ,\ 1 \ll N ,\ 1 \leq \nu \leq p , $$

$$ 0 < \tau _ {b} \leq t + \tau _ {b} \leq t + \tau \leq T , $$

then the inequality

$$ \left \| \frac{\partial ^ \nu K ( t + \tau , t ) }{\partial \tau ^ \nu } \right \| \leq \left ( \frac{L}{N} \right ) ^ \nu $$

is satisfied; here

$$ K ( t + \tau , t ) = X ( t + \tau ) X ^ {-1} ( t) , $$

$ X ( t) $ is the fundamental matrix of the system (1),

$$ \| A \| = \max _ { i } \sum _ { k= 1} ^ { m } | a _ {ik} | , $$

and $ \tau _ {b} $ is the length of the boundary layer. All systems of type (1) for which the conditions a) and b) are satisfied simultaneously after scaling the components of the vectors $ z ( t) $ for each solution, are called stiff.
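The matrix norm used above is the maximum absolute row sum, which is straightforward to compute directly. A minimal sketch in Python (the sample matrix is illustrative, not from the article):

```python
def row_sum_norm(a):
    """Maximum absolute row sum: ||A|| = max_i sum_k |a_ik|."""
    return max(sum(abs(x) for x in row) for row in a)

A = [[1.0, -2.0],
     [0.5,  3.0]]
print(row_sum_norm(A))  # 3.5  (second row: 0.5 + 3.0)
```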

A non-autonomous normal system of ordinary differential equations of order $ m $ is said to be stiff if the autonomous system of order $ m + 1 $ equivalent to it is stiff. The following scalar equation is an example of a stiff non-autonomous equation:

$$ \tag{2 } \frac{dz ( x) }{dx} = q [ z ( x) - \phi ( x) ] + \frac{d \phi ( x) }{dx} ,\ z ( 0) = z ^ {0} , $$

where $ \phi \in C ^ {p} $, $ 0 \leq x < \infty $, is a given function. Another example of a stiff differential system is the linear homogeneous system

$$ \tag{3 } \frac{dz ( t) }{dt} = A z ( t) ,\ \ z ( 0) = z ^ {0} , $$

with constant $ ( m \times m ) $-matrix $ A $ having different eigenvalues divided into two groups:

$$ \tag{4 } \left . \begin{array}{c} | \lambda _ {i} | \leq \beta ,\ \max _ { i } \mathop{\rm Re} \ \lambda _ {i} \leq \alpha ,\ \alpha \geq 0 ,\ 1 \leq i \leq S ; \\ \max _ { i } \mathop{\rm Re} \lambda _ {i} < - \Lambda \ll \ - \beta ,\ \max _ { i } | \lambda _ {i} | = L ,\ \ 1 \ll \frac{L}{\beta} , \\ S + 1 \leq i \leq m . \\ \end{array} \right \} $$

Condition a) is obviously satisfied for (3). The norm is bounded above by $ C _ {1} \beta e ^ {\alpha \tau } $ for $ \tau \geq \tau _ {b} = C _ {2} / \Lambda $, where $ C _ {1} $ and $ C _ {2} $ are certain constants. When $ C _ {1} \beta e ^ {\alpha t } \ll L $, $ N = L / ( C _ {1} \beta e ^ {\alpha T } ) $, condition b) is satisfied. The vector $ d ^ {2} z ( t) / d t ^ {2} $ is bounded above if $ t \geq \tau _ {b} $:

$$ \left \| \frac{d ^ {2} z ( t) }{d t ^ {2} } \right \| \leq \ \left \| \frac{\partial K ( t , 0 ) }{\partial t } \right \| \ \left \| \left . \frac{d z ( t) }{dt} \right | _ {t=0} \right \| \leq $$

$$ \leq \ C _ {1} \beta e ^ {\alpha T } \left \| \left . \frac{d z ( t) }{dt} \right | _ {t=0} \right \| . $$

Thus it is possible to construct for $ dz ( t) / dt $ an algebraic interpolation polynomial of degree zero when $ t \geq \tau _ {b} $, with uniformly-distributed nodes, and to express the interpolation step in the form $ H = C _ {3} / C _ {1} \beta e ^ {\alpha T } $, where $ C _ {3} $ is some constant (see [1]) depending on the given error. On the other hand, use of Euler's polygonal method for (3) requires an upper bound on the size of the integration step on the entire interval of a solution of the system (see [1]):

$$ \tag{5 } | 1 + h \lambda _ {i} | < 1 ,\ \ h < - \max _ { i } \frac{2 \mathop{\rm Re} \lambda _ {i} }{| \lambda _ {i} | } \leq \frac{2}{L} , $$

$$ i = s + 1 \dots m . $$

Here the components of the approximate solution to the system (3) by Euler's polygonal method corresponding to the first group of eigenvalues (4) will be represented with sufficient accuracy (see [1]):

$$ h \beta \leq \frac{2 \beta }{L} \ll 1 ,\ \ 1 \ll \frac{N C _ {3} }{2} \leq \frac{H}{h} . $$
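The step restriction (5) is easy to observe numerically. For the scalar test problem $ dy/dt = \lambda y $, explicit Euler gives $ y _ {n+1} = ( 1 + h \lambda ) y _ {n} $, which decays only when $ | 1 + h \lambda | < 1 $. A sketch with illustrative eigenvalues $ \lambda = -1 $ and $ \lambda = -1000 $ (not taken from the article): the fast component forces $ h < 2/1000 $ even long after that component has died out.

```python
def euler_amplification(lam, h, steps):
    """Apply explicit Euler to dy/dt = lam*y, y(0) = 1, and return y after `steps` steps."""
    y = 1.0
    for _ in range(steps):
        y *= 1.0 + h * lam  # amplification factor (1 + h*lam) per step
    return y

# Slow component (lambda = -1): h = 0.01 is perfectly adequate.
print(abs(euler_amplification(-1.0, 0.01, 1000)))     # ~ 4e-5, decays like e^{-10}
# Same h on the fast component (lambda = -1000) violates h < 2/L:
print(abs(euler_amplification(-1000.0, 0.01, 1000)))  # inf (overflow): |1 + h*lam| = 9 > 1
```

This is the "error explosion" described above: the stiffness ratio, not the accuracy requirement, dictates the step of an explicit method.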

Restrictions of the form (5) on the integration step are characteristic for extrapolation methods of Runge–Kutta or Adams type. The quotient $ H / h $, which can be regarded as a qualitative measure of the stiffness of the system (3), attains values of order $ 10 ^ {4} $– $ 10 ^ {10} $ in many cases. The mathematical descriptions of dynamical processes and phenomena in physics, chemistry, biology, technology, and economics connected with the calculation of an ever larger number of factors raising the order of the differential systems, lead to an increase in stiffness. Stiff differential systems require special methods of solution.

In certain cases, the original system (1) can be transformed using the theory of, and asymptotic methods for, differential equations with a small parameter in front of the leading derivatives, the behaviour of the solution of which in the boundary layer is described in terms of exponentially-damped functions (see [2]). However, it is usually difficult to reduce a stiff differential system to a similar form, and, moreover, the stiffness is not always essentially decreased after asymptotic transformation. Thus, numerical methods are used to solve systems of a general form. The main property of these methods is that they guarantee a suppression of the rapidly-damping components of the solution of (1) outside the boundary layer for a value of the integration step used near to that of the algebraic interpolation step.

It is appropriate to use implicit methods (see [9], [17]) in the solution of stiff differential systems. To construct implicit schemes, the method of undetermined coefficients can be used. On the basis of Taylor's formula it can be written in the form

$$ \tag{6 } z ( t _ {n+1} ) = z ( t _ {n} ) + \sum _ {\nu = 1 } ^ { p } \left . (- 1) ^ {\nu + 1 } \frac{d ^ \nu z ( t) }{d t ^ \nu } \right | _ {t = t _ {n+1} } \cdot \frac{H ^ \nu }{\nu ! } + $$

$$ + \int\limits _ { 0 } ^ { H } \left . (- 1) ^ {p+1} \frac{\tau ^ {p} }{p!} \frac{d ^ {p+1} z ( t) }{dt ^ {p+1} } \right | _ {t = t _ {n} + \tau } d \tau , $$

where $ n $ is a positive integer, $ H > 0 $, $ n H = t _ {n} $. For example, such methods are described in [9]:

$$ \tag{7 } \sum _ {\chi = 0 } ^ { r } a _ \chi y _ {n + 1 - \chi } = \ H f ( y _ {n+1} ) ,\ y _ {n} = y ( t _ {n} ) , $$

$ r = 1 $, $ a _ {0} = 1 $, $ a _ {1} = - 1 $ — the implicit polygonal method;

$ r = 2 $, $ a _ {0} = 3 / 2 $, $ a _ {1} = - 2 $, $ a _ {2} = 1 / 2 $ — the implicit $ 2 $-nd order method;

$ r = 3 $, $ a _ {0} = 11 / 6 $, $ a _ {1} = - 3 $, $ a _ {2} = 3 / 2 $, $ a _ {3} = - 1 / 3 $ — the implicit $ 3 $-rd order method;

$ r = 4 $, $ a _ {0} = 25 / 12 $, $ a _ {1} = - 4 $, $ a _ {2} = 3 $, $ a _ {3} = - 4 / 3 $, $ a _ {4} = 1 / 4 $ — the implicit 4th order method.
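The coefficient sets of (7) can be verified against the exactness requirement behind (6). With uniform nodes $ t _ {j} = jH $, applying (7) to the monomial solutions $ z ( t) = t ^ {k} $ yields the conditions $ \sum _ \chi a _ \chi ( n + 1 - \chi ) ^ {k} = k ( n + 1 ) ^ {k-1} $ for $ k = 0 \dots r $ (this restatement of the order conditions is a derivation here, not a formula from the article). A sketch using exact rational arithmetic:

```python
from fractions import Fraction as F

# Coefficient sets a_0 ... a_r of the implicit methods (7).
BDF = {
    1: [F(1), F(-1)],
    2: [F(3, 2), F(-2), F(1, 2)],
    3: [F(11, 6), F(-3), F(3, 2), F(-1, 3)],
    4: [F(25, 12), F(-4), F(3), F(-4, 3), F(1, 4)],
}

def order_condition(a, k, n):
    """Check sum_chi a_chi*(n+1-chi)^k == k*(n+1)^(k-1), i.e. exactness on z(t) = t^k."""
    lhs = sum(c * F(n + 1 - chi) ** k for chi, c in enumerate(a))
    rhs = F(0) if k == 0 else k * F(n + 1) ** (k - 1)
    return lhs == rhs

for r, a in BDF.items():
    # An r-step set must be exact for all polynomial degrees k <= r, at every node index n.
    assert all(order_condition(a, k, n) for k in range(r + 1) for n in range(6))
print("all coefficient sets in (7) are exact on polynomials of degree <= r")
```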

By the order of a method is understood the highest power $ H ^ {k} $ in the expansion (7) in powers of $ H $ whose coefficient is the same as the corresponding coefficient in (6).

Application of the implicit polygonal method to the system (3) leads to the difference equations

$$ \tag{8 } ( E - H A ) y _ {n+1} = y _ {n} ,\ \ y _ {0} = z ^ {0} . $$

Suppose that the system (3) is asymptotically Lyapunov stable. Then the matrix $ E - H A $ is non-singular for all $ H $. The Lagrange–Sylvester formula can be used to represent the solution of (8) in the form

$$ \tag{9 } y _ {n} = \sum _ { i= 1} ^ { m } P _ {i} ( A , \lambda _ {1} \dots \lambda _ {m} ) \frac{z ^ {0} }{( 1 - H \lambda _ {i} ) ^ {n} } . $$

For the implicit polygonal method (8), the condition of asymptotic stability,

$$ \left | \frac{1}{1 - H \lambda _ {i} } \right | < 1 $$

is satisfied for all $ H $, and in (9) the components of the solution corresponding to the second group of eigenvalues (4) will be rapidly decreasing with increasing $ n $. The value of $ H $ is restricted only by the requirements of the desired accuracy of the approximate solutions. The tendency to raise the order of linear multi-step methods leads to a definite contradiction to their stability (see [11]).
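In scalar form the implicit polygonal (backward Euler) step of (8) reads $ y _ {n+1} = y _ {n} / ( 1 - H \lambda ) $, and by the stability condition above $ | 1 / ( 1 - H \lambda ) | < 1 $ for every $ H > 0 $ when $ \mathop{\rm Re} \lambda < 0 $. A sketch with an illustrative stiff eigenvalue (the numbers are not from the article):

```python
def backward_euler(lam, H, steps):
    """Implicit polygonal method for dy/dt = lam*y: y_{n+1} = y_n / (1 - H*lam)."""
    y = 1.0
    for _ in range(steps):
        y /= 1.0 - H * lam  # unconditionally contractive for Re(lam) < 0
    return y

# A step H = 0.1, fifty times larger than the explicit limit 2/L = 0.002,
# still suppresses the fast component instead of amplifying it:
print(backward_euler(-1000.0, 0.1, 10))  # (1/101)**10, vanishingly small
print(backward_euler(-1.0, 0.1, 10))     # ~0.386, close to the exact e^{-1} ~ 0.368
```

Only the desired accuracy, not stability, now limits $ H $, exactly as stated for (9).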

The $ r $-step method

$$ \tag{10 } \sum _ {\chi = 0 } ^ { r } \alpha _ \chi y _ {n+ 1 - \chi } = H \sum _ {\chi = 0 } ^ { r } b _ \chi f ( y _ {n+ 1 - \chi } ) $$

is said to be $ A $-stable if all solutions of (10) tend to zero as $ n \rightarrow \infty $ for a fixed positive $ H $ in the case when (10) is applied to the scalar equation

$$ \tag{11 } \frac{dz}{dt} = qz , $$

with $ q $ a complex constant with negative real part. An explicit $ r $-step method cannot be $ A $-stable. The order of a linear multi-step $ A $-stable method cannot exceed two. The smallest error constant $ c _ \delta = 1 / 12 $ is obtained for the implicit trapezium method (see [11]):

$$ \tag{12 } y _ {n+1} = y _ {n} + \frac{H}{2} [ f ( y _ {n+1} ) + f ( y _ {n} ) ] . $$
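For the scalar test equation (11) the trapezium method (12) also solves in closed form, with growth factor $ ( 1 + Hq / 2 ) / ( 1 - Hq / 2 ) $, whose modulus is below one for every $ H $ when $ \mathop{\rm Re} q < 0 $. A sketch (the values of $ q $, $ H $ and the integration interval are illustrative choices):

```python
import math

# Sketch: the implicit trapezium method (12) on dz/dt = q*z, where the
# implicit relation solves in closed form.  q and H are illustrative.

def trapezium(q, H, y0, steps):
    growth = (1 + H * q / 2) / (1 - H * q / 2)   # |growth| < 1 for Re q < 0
    y = y0
    for _ in range(steps):
        y = growth * y
    return y

# second-order accuracy: integrate dz/dt = -z from z(0) = 1 up to t = 1
approx = trapezium(-1.0, 0.01, 1.0, 100)
print(abs(approx - math.exp(-1)))   # small error, O(H^2)
```

The same function with a stiff $ q $ and a large step still produces a bounded, decaying sequence, illustrating $ A $-stability.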

The requirement of $ A ( \alpha ) $-stability imposes less rigid restrictions on the methods than $ A $-stability. The $ r $-step method (10) for fixed $ H > 0 $ is said to be $ A ( \alpha ) $-stable if all solutions of (10) tend to zero as $ n \rightarrow \infty $ when (10) is applied to the equation (11), with $ q $ a complex constant in the set

$$ s _ \alpha = \{ z \in \mathbf C : | \mathop{\rm arg} ( - z) | < \alpha ,\ z \neq 0 \} , $$

where $ \mathbf C $ is the complex plane. An explicit $ r $-step method cannot be $ A ( 0) $-stable. There exists only one $ A ( 0) $-stable $ r $-step method of order $ r+ 1 $, namely (12). For $ \alpha \in [ 0 , \pi / 2 ) $, there exist $ A ( \alpha ) $-stable methods of order $ r $ with $ r = 3, 4 $ (see [13]).

The following weakening of the requirement of $ A $-stability is contained in the so-called definition of Gear (see [14]): (10) is said to be a stiffly-stable method if all solutions of (10) tend to zero as $ n \rightarrow \infty $ when (10) is applied to equation (11), where $ Hq $ is a complex constant in $ R _ {1} \cup R _ {2} $, with

$$ R _ {1} = \{ {Hq \in \mathbf C } : {- a \leq \mathop{\rm Re} Hq \leq b ,\ - d \leq \mathop{\rm Im} Hq \leq d } \} , $$

$$ R _ {2} = \{ Hq \in \mathbf C : \mathop{\rm Re} Hq < - a \} , $$

and the given accuracy is guaranteed for $ Hq \in R _ {1} $.

The methods (7) are stiffly stable, and thus $ A ( \alpha ) $-stable; for these methods, $ a \leq 0.7 $.

In accordance with (16), implicit analogues of the explicit Runge–Kutta methods of arbitrary order can be constructed that are moreover $ A $-stable and stiffly stable. For example, the $ 2 $-nd order method

$$ \tag{13 } y _ {n+1} = y _ {n} + Hf \left ( y _ {n+1} - \frac{H}{2} f ( y _ {n+1} ) \right ) . $$

Application of (13) to (3) leads to the difference equations

$$ \left ( E - HA + \frac{H ^ {2} A ^ {2} }{2} \right ) y _ {n+1} = y _ {n} , $$
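The $ A $-stability asserted here can be checked numerically: for the test equation (11), method (13) has growth factor $ R ( z ) = 1 / ( 1 - z + z ^ {2} / 2 ) $ with $ z = Hq $, and sampling points with $ \mathop{\rm Re} z < 0 $ confirms $ | R ( z) | < 1 $. A sketch; the sample grid is an arbitrary choice:

```python
# Sketch: sample the stability function R(z) = 1/(1 - z + z^2/2) of
# method (13) over a grid in the open left half-plane Re z < 0.
# The grid bounds and spacing are arbitrary illustrative choices.

def R(z):
    return 1 / (1 - z + z * z / 2)

samples = [complex(-x / 10, y / 10)
           for x in range(1, 101) for y in range(-100, 101)]
assert all(abs(R(z)) < 1 for z in samples)
print("max |R(z)| on grid:", max(abs(R(z)) for z in samples))
```

On the imaginary axis itself $ | 1 - iy - y ^ {2} / 2 | ^ {2} = 1 + y ^ {4} / 4 \geq 1 $, so the bound is attained only at $ z = 0 $.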

which prove that it is $ A $-stable. Similarly, for the $ 3 $-rd order method

$$ \tag{14 } y _ {n+1} = y _ {n} + \frac{1}{6} ( K _ {1} + 4K _ {2} + K _ {3} ) , $$

$$ K _ {1} = Hf ( y _ {n+1} ) ,\ K _ {2} = Hf \left ( y _ {n+1} - \frac{K _ {1} }{2} \right ) , $$

$$ K _ {3} = Hf ( y _ {n+1} - 2K _ {2} + K _ {1} ) $$

the following difference equations are similarly obtained:

$$ \left ( E - HA + \frac{H ^ {2} A ^ {2} }{2} - \frac{H ^ {3} A ^ {3} }{6} \right ) y _ {n+1} = y _ {n} . $$

$ A $-stable and stiffly-stable methods of higher order are constructed in the same way. Methods (13) and (14), and those published in [5], are fundamentally different from the so-called implicit Runge–Kutta methods (see [16]), which have not become widely used because of their greater computational laboriousness.

Application of implicit methods requires more computational effort at each step than is the case for explicit methods, but in stiff differential systems this is justified by a sharp increase in the size of the integration step. To solve (3), it is necessary to invert a matrix or to solve a system of linear equations. Here the problem of ill-conditioning may arise, because the condition number of the matrix $ E - HA $ increases as $ H $ increases. In the general case of (1), at every integration step it is necessary to solve a non-linear system of equations for the vector $ y _ {n+1} $. It is common to use a modification of Newton's method, with the initial approximation calculated from some extrapolation formula using $ y _ {n} $. The extrapolation method is then called a predictor and the implicit method a corrector. The method of simple iteration is inapplicable in stiff differential systems because of the large values of $ LH $. Since Newton's method is applied in implicit schemes to solve the equations $ F ( y _ {n+1} ) = 0 $, it is necessary to compute the Jacobi matrix of (1); this matrix is sometimes inserted directly into the formulas of methods that also have the property of $ A $-stability for the solution of linear systems (see [12], [15]). Gear's procedure (Gear's method), which automatically controls the error at each step by altering the order of the method or the step size, is widely applied in the solution of stiff differential systems. Methods (7) are also used as correctors in Gear's procedure (see [9]).
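The predictor–corrector pattern with Newton's method can be sketched for a single scalar implicit polygonal step, $ F ( y _ {n+1} ) = y _ {n+1} - y _ {n} - H f ( y _ {n+1} ) = 0 $. The right-hand side $ f $, the step $ H $ and the iteration count below are illustrative assumptions, not prescriptions from the article:

```python
# Sketch: one implicit polygonal (backward Euler) step solved by Newton's
# method, with the explicit Euler value as predictor.  The stiff f and all
# numeric parameters are illustrative assumptions.

def f(y):
    return -1000.0 * (y - 0.5)   # stiff linear pull toward 0.5

def df(y):
    return -1000.0               # derivative of f (scalar Jacobi "matrix")

def backward_euler_newton_step(yn, H, iters=3):
    F = lambda y: y - yn - H * f(y)
    y = yn + H * f(yn)           # predictor: explicit Euler extrapolation
    for _ in range(iters):       # corrector: Newton iteration
        y -= F(y) / (1 - H * df(y))
    return y

y1 = backward_euler_newton_step(0.0, 0.1)
print(y1, abs(y1 - 0.0 - 0.1 * f(y1)))   # residual of the implicit equation
```

For this linear $ f $ a single Newton iteration already lands on the exact implicit solution $ y _ {1} = 50 / 101 $; simple iteration would diverge here because $ | H f ^ \prime | = 100 \gg 1 $.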
Another approach to the creation of methods for the integration of stiff systems of equations is connected with incorporating the solution of the corresponding linear systems into the formulas of the methods (see [4]–[8], [10]). In the first articles in this direction, stiff systems of equations were discussed for which the eigenvalues of the matrix $ ( \partial f ( z) / \partial z) $ were known, and the matrix coefficients of the methods were constructed from them. Because of the difficulty of solving the eigenvalue problem, this direction was not developed for a considerable time. In [6], [7], [8], methods for constructing the matrix coefficients were proposed that did not require the solution of the eigenvalue problem for the matrix $ (\partial f ( z) / \partial z) $ of the system (1). The methods in this direction can be constructed on the basis of the following equation (see [7]):

$$ \tag{15 } z ( t _ {n+1} ) - z ( t _ {n} ) + $$

$$ - \left [ \int\limits _ { 0 } ^ { H } \phi ^ {-1} ( t _ {n} + \tau ) d \tau + C \right ] \phi ( t _ {n+1} ) \left . \frac{dz}{dt} \right | _ {t = t _ {n} } + $$

$$ + C \phi ( t _ {n} ) \left . \frac{dz ( t) }{dt} \right | _ {t = t _ {n+1} } = \int\limits _ { 0 } ^ { H } \left [ \int\limits _ { 0 } ^ \tau \phi ^ {-1} ( t _ {n} + \rho ) d \rho + C \right ] \times $$

$$ \times \left [ \phi ( t _ {n} + \tau ) \frac{d ^ {2} z ( t) }{d t ^ {2} } - \frac{ d \phi ( t _ {n} + \tau ) }{d \tau } \frac{dz ( t) }{dt} \right ] _ {t = t _ {n+1} - \tau } d \tau , $$

where $ \phi ( t _ {n} + \tau ) \in C ^ {1} $, $ 0 \leq \tau \leq H $, is a non-singular ($ m \times m $)-matrix and $ C $ is a matrix not depending on $ \tau $.

A different choice of the matrices $ \phi ( t _ {n} + \tau ) $ and $ C $ leads to difference equations corresponding to some method of numerical integration if the right-hand side of (15) is disregarded. For $ C = 0 $, explicit methods are obtained, and for $ C \neq 0 $, implicit methods. Suppose that $ \phi ( t _ {n} + \tau ) = E $. Then the implicit polygonal method (7) corresponds to $ C = - HE $; the trapezium method (12) to $ C = - ( H / 2 ) E $; and the explicit polygonal method to $ C = 0 $. When $ \phi ( t _ {n} + \tau ) = e ^ {A \tau } $, where $ A $ is a constant ($ m \times m $)-matrix, a generalized polygonal method is obtained (see [7]):

$$ \tag{16 } y ( t _ {n+1} ) = y ( t _ {n} ) + \int\limits _ { 0 } ^ { H } e ^ {A \tau } d \tau f ( y ( t _ {n} ) ) . $$

Formula (16) becomes the explicit polygonal method when $ A = 0 $. Method (16) gives an exact solution of the system of equations

$$ \tag{17 } \frac{d z ( t ) }{dt} = A z ( t ) + M ,\ z ( 0) = z ^ {0} ,\ M \in \mathbf R ^ {m} , $$

for the discrete values $ t _ {n} = nH $. If the matrix

$$ \Phi ( A , h ) = \int\limits _ { 0 } ^ { h } e ^ {A \tau } d \tau $$

is known, then $ k $ iterations of the recurrence relation

$$ \tag{18 } \Phi ( A , 2 ^ {q+1} h ) = \Phi ( A , 2 ^ {q} h ) [ 2E + A \Phi ( A , 2 ^ {q} h ) ] $$

leads to the matrix

$$ \Phi ( A , 2 ^ {k} h ) = \int\limits _ { 0 } ^ { 2 ^ {k} h } e ^ {A \tau } d \tau , $$

applied in (16). As the initial value in (18) for sufficiently small $ h \leq 1 / \| A \| $, it is appropriate to use the approximation formula

$$ \tag{19 } \left . \begin{array}{c} \int\limits _ { 0 } ^ { h } e ^ {A \tau } d \tau \cong h \left ( E - \frac{hA}{2} \right ) ^ {-1} = \Phi _ {0} , \\ e ^ {AH} \cong \left ( E - \frac{Ah}{2} \right ) ^ {-1} \left ( E + \frac{Ah}{2} \right ) = E + A \Phi _ {0} , \\ \end{array} \right \} $$
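In the scalar case ($ m = 1 $, $ A = a $) the recurrence (18) started from the seed (19) can be checked directly against the closed form $ \Phi ( a , h ) = ( e ^ {ah} - 1 ) / a $, and a step of (16) built from it reproduces the solution of (17) up to the accuracy of the seed. The values of $ a $, $ h $, $ M $ and the number of doublings below are illustrative choices:

```python
import math

# Sketch (scalar case m = 1): the doubling recurrence (18) seeded by (19),
# compared with the closed form Phi(a, h) = (exp(a*h) - 1)/a.
# a, h, M, z0 and the number of doublings k are illustrative choices.

def phi_seed(a, h):
    # formula (19): h * (E - h*a/2)^{-1}, scalar version
    return h / (1 - h * a / 2)

def phi_doubled(a, h, k):
    # formula (18): Phi(a, 2h) = Phi(a, h) * (2 + a * Phi(a, h))
    p = phi_seed(a, h)
    for _ in range(k):
        p = p * (2 + a * p)
    return p

a, h, k = -2.0, 1.0 / 256, 8           # h <= 1/|a| as required; 2^8 * h = 1
approx = phi_doubled(a, h, k)
exact = (math.exp(a * 2 ** k * h) - 1) / a
print(approx, exact)

# one step of the generalized polygonal method (16) on dz/dt = a*z + M
M, z0, H = 3.0, 1.0, 2 ** k * h
y1 = z0 + approx * (a * z0 + M)        # method (16)
z1 = math.exp(a * H) * z0 + exact * M  # exact solution of (17) at t = H
print(y1, z1)
```

With the exact $ \Phi $ the step (16) solves (17) without error; the small residual here comes only from the seed (19).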

or, if the eigenvalues of the matrix are real,

$$ \tag{20 } \int\limits _ { 0 } ^ { h } e ^ {A \tau } d \tau \cong h \sum _ {\gamma = 0 } ^ { s } \frac{A ^ \gamma h ^ \gamma }{( \gamma + 1 )! } = \Phi _ {0} . $$

Formula (19) gives a Lyapunov-stable correspondence between solutions of the differential system (17) and of the difference system (16) when $ A $ has complex eigenvalues with small real part. If the domain $ \overline{G} $ of existence of the solutions is closed, convex in $ z $, and if the approximate solutions belong to this domain, then the error in (16) satisfies the following difference equation (see [7]):

$$ \epsilon _ {n+1} = \left \{ e ^ {AH} + \int\limits _ { 0 } ^ { H } e ^ {A \tau } \ d \tau \left [ \int\limits _ { 0 } ^ { 1 } \frac{\partial f ( y _ {n} + \rho \epsilon _ {n} ) }{\partial y _ {n} } d \rho - A \right ] \right \} \epsilon _ {n} + $$

$$ + \int\limits _ { 0 } ^ { H } \int\limits _ { 0 } ^ \tau e ^ {A \xi } d \xi \left [ \frac{d ^ {2} z ( t ) }{d t ^ {2} } - A \frac{dz ( t ) }{dt} \right ] _ {t = t _ {n+1} - \tau } d \tau , $$

where $ \epsilon _ {n} = z _ {n} - y _ {n} $ and $ z _ {n} = z ( t _ {n} ) $, $ y _ {n} = y ( t _ {n} ) $ are solutions of (1) and (16), respectively. Let $ \| y \| = \max _ {i} | y _ {i} | $ be the norm of the vector (the norms of the matrices are subordinated to this vector norm), and suppose that the following inequalities are satisfied in $ \overline{G} $:

$$ \left \| \int\limits _ { 0 } ^ { 1 } \frac{\partial f ( y _ {n} + \rho \epsilon _ {n} ) }{\partial y _ {n} } d \rho - A \right \| \leq \mu _ {1} , $$

$$ \left \| \frac{\partial f ( z) }{\partial z } - A \right \| \leq \mu _ {2} ,\ \left \| \frac{d z ( t ) }{dt} \right \| \leq l. $$

The following number is computed for a real matrix $ A $:

$$ R = \max _ { i } \left ( a _ {ii} + \sum _ {\begin{array}{c} k= 1 \\ k\neq i \end{array} } ^ { m } | a _ {ik} | \right ) ,\ \| e ^ {At} \| \leq e ^ {Rt} . $$
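The number $ R $ is the logarithmic norm of $ A $ associated with the maximum norm, and the bound $ \| e ^ {At} \| \leq e ^ {Rt} $ can be verified numerically. In the sketch below the matrix $ A $, the time $ t $ and the use of a truncated Taylor series for $ e ^ {At} $ are illustrative assumptions:

```python
import math

# Sketch: R = max_i ( a_ii + sum_{k != i} |a_ik| ) for a small real matrix,
# checked against ||exp(A t)||_inf <= exp(R t).  The matrix A and t are
# illustrative; exp(A t) is evaluated by a truncated Taylor series.

A = [[-3.0, 1.0], [2.0, -4.0]]

def log_norm_inf(A):
    return max(A[i][i] + sum(abs(A[i][k]) for k in range(len(A)) if k != i)
               for i in range(len(A)))

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, t, terms=60):
    n = len(A)
    At = [[t * A[i][j] for j in range(n)] for i in range(n)]
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # E
    term = [row[:] for row in result]
    for s in range(1, terms):              # Taylor series of exp(A t)
        term = [[x / s for x in row] for row in mat_mul(term, At)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

def norm_inf(X):
    return max(sum(abs(x) for x in row) for row in X)

R = log_norm_inf(A)                            # here R = -2
print(norm_inf(expm(A, 1.0)), math.exp(R))     # left side bounded by right
```

Note that $ R $ can be negative even though the bound $ \| e ^ {At} \| \leq e ^ {Rt} $ then forces every solution component to decay.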

Then the following estimates hold:

$$ \| \epsilon _ {n+1} \| \leq \left ( e ^ {RH} + \int\limits _ { 0 } ^ { H } e ^ {R \tau } \ d \tau \cdot \mu _ {1} \right ) \| \epsilon _ {n} \| + \int\limits _ { 0 } ^ { H } \int\limits _ { 0 } ^ \tau e ^ {R \xi } d \xi d \tau \cdot \mu _ {2} l , $$

$$ \| \epsilon _ {0} \| = \| \epsilon ( 0) \| ; $$

$$ R = 0 ,\ \| \epsilon _ {n+1} \| \leq ( 1 + \mu _ {1} H ) \| \epsilon _ {n} \| + \frac{H ^ {2} }{2} \mu _ {2} l ; $$

$$ 0 < \mu _ {1} \leq - R = \alpha ,\ \| \epsilon _ {n+1} \| \leq \| \epsilon _ {n} \| + \frac{\alpha H + e ^ {- \alpha H } - 1 }{\alpha ^ {2} } \mu _ {2} l; $$

$$ 0 \leq \mu _ {1} \leq R ,\ \| \epsilon _ {n+1} \| \leq e ^ {2RH} \| \epsilon _ {n} \| + \frac{e ^ {RH} - 1 - RH }{R ^ {2} } \mu _ {2} l; $$

$$ 0 < R \leq \mu _ {1} ,\ \| \epsilon _ {n+1} \| \leq e ^ {2 \mu _ {1} H } \| \epsilon _ {n} \| + \frac{e ^ {\mu _ {1} H } - 1 - \mu _ {1} H }{\mu _ {1} ^ {2} } \mu _ {2} l . $$

If $ 0 \leq \alpha = - R \leq \mu _ {1} $, the error $ \epsilon _ {n} $ can be estimated on the assumption that $ R = 0 $. Other vector norms are possible in the estimates, with corresponding matrix norms and logarithmic norms (see [3]). These estimates show that in the solution of (1), the integration step $ H $ can be taken significantly larger than in classical methods. The matrix $ A $ must be chosen in such a way that all its entries are close to those of the Jacobi matrix of the system (1). In the boundary layer, where the variables change rapidly, rough estimates of $ \mu _ {1} $, $ \mu _ {2} $, $ l $, and $ R $ for the approximate solution make it possible to change $ A $ so as to obtain the necessary accuracy. Since the variables in (1) change slowly beyond the boundary layer, it often turns out that one matrix $ A $ suffices to compute all solutions for $ \tau _ {b} \leq t \leq T $. The accuracy can be checked by using Runge's rule (see [1]).

To increase the accuracy, a class of methods for the numerical integration of systems has been suggested in [7], based on the method (16). The following requirements are satisfied for the methods in this class: 1) an $ s $-th order method must be exact for the integration of algebraic polynomials of degree $ s- 1 $ when $ A = 0 $; and 2) a method of arbitrary order must be exact for the solution of (17).

The one-step $ 2 $-nd order method has the form:

$$ y _ {n+1} = y _ {n} + \int\limits _ { 0 } ^ { H / 2 } e ^ {A \tau } d \tau \times $$

$$ \times \left [ 2 f \left ( y _ {n} + \int\limits _ { 0 } ^ { H / 2 } e ^ {A \tau } d \tau f ( y _ {n} ) \right ) - A \int\limits _ { 0 } ^ { H / 2 } e ^ {A \tau } d \tau f ( y _ {n} ) \right ] . $$
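Requirement 2) can be checked directly in the scalar case: with $ \Phi = \int _ {0} ^ {H/2} e ^ {a \tau } d \tau = ( e ^ {aH/2} - 1 ) / a $ the method above reproduces the solution of (17) exactly, since $ \Phi ( 2 + a \Phi ) = ( e ^ {aH} - 1 ) / a $. A sketch with illustrative values of $ a $, $ M $, $ z ^ {0} $ and $ H $:

```python
import math

# Sketch (scalar case m = 1): the one-step 2nd order method applied to the
# linear equation dz/dt = a*z + M, on which this class of methods is exact
# by construction.  a, M, z0 and H are illustrative choices.

def phi(a, h):
    return (math.exp(a * h) - 1) / a    # integral of exp(a*tau) over [0, h]

def step(f, a, y, H):
    p = phi(a, H / 2)                   # Phi over the half-step
    k = f(y)
    return y + p * (2 * f(y + p * k) - a * p * k)

a, M, z0, H = -3.0, 2.0, 5.0, 0.4
f = lambda y: a * y + M

y1 = step(f, a, z0, H)
z1 = math.exp(a * H) * z0 + phi(a, H) * M   # exact solution of (17) at t = H
print(y1, z1)                                # agree to rounding error
```

For $ a = 0 $ the step degenerates (with $ \Phi = H / 2 $) into the explicit midpoint rule, consistent with the remark below that these methods become formulas of Runge–Kutta type when $ A = 0 $.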

Explicit one-step $ 3 $-rd order methods and multi-step methods for systems are constructed in the same way. Asymptotic estimates for the errors have been found. For $ A = 0 $, these methods become formulas of Runge–Kutta or Adams type. When the integrals of $ e ^ {At} $ are approximated by fractional-linear matrix polynomials in $ A $, taking into account the non-singularity of the corresponding matrices, the methods for systems go over into the explicit formulas of the corresponding implicit methods obtained after iteration by Newton's method. The following is an implicit $ 1 $-st order method for systems:

$$ \tag{21 } y _ {n+1} = y _ {n} + \int\limits _ { 0 } ^ { H } e ^ {A \tau } d \tau f ( y _ {n} ) + $$

$$ + \int\limits _ { 0 } ^ { H } e ^ {A \tau } d \tau [ f ( y _ {n+1} ) - f ( y _ {n} ) - A ( y _ {n+1} - y _ {n} ) ] . $$

It is obtained from (15) if one chooses

$$ \phi ( t _ {n} + \tau ) = e ^ {A \tau } ,\ C = - \int\limits _ { 0 } ^ { H } e ^ {- A \tau } d \tau . $$

The equations (21) are obtained by the method of reducing differential equations to integral equations (see [8]). The integral of $ e ^ {A \tau } $ in (21) is calculated according to (18) with the initial condition (20). A method of correction of this integral in the course of solving has been investigated (see ).

References

[1] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian)
[2] A.B. Vasil'eva, "Constructions of uniform approximations to solutions of systems of differential equations with small parameter in front of the leading derivative" Mat. Sb. , 50 (1960) pp. 43–58 (In Russian)
[3] B.F. Bylov, R.E. Vinograd, D.M. Grobman, V.V. Nemytskii, "The theory of Lyapunov exponents and its applications to problems of stability" , Moscow (1966) (In Russian)
[4] M.K. Gavurin, "An experiment in the numerical integration of ordinary differential equations" Met. Vychisl. , 1 (1963) pp. 45–51 (In Russian)
[5] Yu.V. Rakitskii, "Asymptotic error formulas for solutions of systems of ordinary differential equations by functional numerical methods" Soviet Math. Dokl. , 11 : 4 (1970) pp. 861–863 Dokl. Akad. Nauk SSSR , 193 : 1 (1970) pp. 40–42
[6] Yu.V. Rakitskii, "Methods for successive step increase in the numerical integration of systems of ordinary differential equations" Soviet Math. Dokl. , 13 : 6 (1972) pp. 1624–1627 Dokl. Akad. Nauk SSSR , 207 : 4 (1972) pp. 793–795
[7] Yu.V. Rakitskii, Trudy Leningrad. Polytech. Inst. , 332 (1973) pp. 88–97
[8] B.V. Pavlov, A.Ya. Povzner, "A method for the numerical integration of systems of ordinary differential equations" USSR Comp. Math. Math. Phys. , 13 : 4 (1973) pp. 292–297 Zh. Vychisl. Mat. i Mat. Fiz. , 13 : 4 (1973) pp. 1056–1059
[9] C.F. Curtiss, J.O. Hirschfelder, "Integration of stiff equations" Proc. Nat. Acad. Sci. USA , 38 (1952) pp. 235–243
[10] R.H.S. Mah, S. Michaelson, R.W. Sargent, J. Chem. Eng. Sci. , 17 (1962) pp. 619–639
[11] G. Dahlquist, "A special stability problem for linear multistep methods" Nordisk. Tidskr. Informationsbehandling , 3 (1963) pp. 27–43
[12] H.H. Rosenbrock, "Some general implicit processes for the numerical solution of differential equations" Comput. J. , 5 (1963) pp. 329–330
[13] O.B. Widlund, "A note on unconditionally stable linear multistep methods" Nordisk. Tidskr. Informationsbehandling , 7 (1967) pp. 65–70
[14] C.W. Gear, "The automatic integration of stiff ordinary differential equations (with discussion)" , Information processing 68 , 1 , North-Holland (1969) pp. 187–193
[15] J.D. Lambert, S.T. Sigurdsson, "Multistep methods with variable matrix coefficients" SIAM J. Numer. Anal. , 9 (1972) pp. 715–733
[16] J.D. Lambert, "Computational methods in ordinary differential equations" , Wiley (1973)
[17] R.A. Willoughby (ed.) , Stiff differential systems , The IBM research symposia series , Plenum (1974)

Comments

The use of asymptotic methods for the system (1) as described in [2] has also been investigated in [a1]. The development of numerical methods for stiff differential equations was stimulated by [a2]. A number of theses have recently been devoted to this subject. For an introduction to the theory of stiff differential equations, [a3] can be used as a reference.

References

[a1] J.E. Flaherty, R.E. O'Malley, "Numerical methods for stiff systems of two-point boundary value problems" SIAM J. Sci. and Statist. Comp. , 5 (1984) pp. 865–886
[a2] W. Liniger, R.A. Willoughby, "Efficient integration methods for stiff systems of ordinary differential equations" SIAM J. Numer. Anal. , 7 (1970) pp. 47–66
[a3] K. Dekker, J.G. Verwer, "Stability of Runge–Kutta methods for stiff nonlinear differential equations" , CWI Monographs , 2 , North-Holland (1984)
[a4] W.L. Miranker, "Numerical methods for stiff equations and singular perturbation problems" , Reidel (1981)
How to Cite This Entry:
Stiff differential system. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Stiff_differential_system&oldid=52108
This article was adapted from an original article by Yu.V. Rakitskii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article