Linear system of differential equations with periodic coefficients
  
A system of $ n $ linear ordinary differential equations of the form
  
$$ \tag{1 }
\left .
\begin{array}{c}
\frac{d x _ {1} }{d t }  = \alpha _ {11} ( t) x _ {1} + \dots + \alpha _ {1n} ( t) x _ {n} ,  \\
{\dots \dots \dots \dots \dots }  \\
\frac{d x _ {n} }{d t }  = \alpha _ {n1} ( t) x _ {1} + \dots + \alpha _ {nn} ( t) x _ {n} ,  \\
\end{array}
\right \}
$$
where $ t $ is a real variable, $ \alpha _ {jh} ( t) $ and $ x _ {h} = x _ {h} ( t) $ are complex-valued functions, and

$$ \tag{2 }
\alpha _ {jh} ( t + T )  = \alpha _ {jh} ( t) \ \  \textrm{ for any }  j , h .
$$

The number $ T > 0 $ is called the period of the coefficients of the system (1). It is convenient to write (1) as one vector equation

$$ \tag{3 }
\frac{dx}{dt}  = A ( t) x ,
$$

where
  
$$
x ^ {T}  = ( x _ {1} \dots x _ {n} ) ,\ \
A ( t)  = \| \alpha _ {jh} ( t) \| ,\ \
j , h = 1 \dots n .
$$
It is assumed that the functions $ \alpha _ {jh} ( t) $ are defined for $ t \in \mathbf R $ and are measurable and Lebesgue integrable on $ [ 0 , T ] $, and that the equalities (2) are satisfied almost-everywhere, that is, $ A ( t + T ) = A ( t) $. A solution of (3) is a vector function $ x = x ( t) $ with absolutely-continuous components such that (3) is satisfied almost-everywhere. Suppose that $ t _ {0} \in \mathbf R $ and $ a $ are an (arbitrarily) given number and vector. A solution $ x ( t) $ satisfying the condition $ x ( t _ {0} ) = a $ exists and is uniquely determined. A matrix $ X ( t) $ of order $ n $ with absolutely-continuous entries is called the matrizant (or evolution matrix, or transition matrix, or [[Cauchy matrix|Cauchy matrix]]) of (3) if almost-everywhere on $ \mathbf R $ one has
  
$$
\frac{dX}{dt}  = A ( t) X
$$
  
and $ X ( 0) = I $, where $ I $ is the unit $ n \times n $ matrix. The transition matrix $ X ( t) $ satisfies the relation
  
$$
X ( t + T )  = X ( t) X ( T) ,\ \  t \in \mathbf R .
$$
  
The matrix $ X ( T) $ is called the monodromy matrix, and its eigen values $ \rho _ {j} $ are called the multipliers of (3). The equation
  
$$ \tag{4 }
\mathop{\rm det}  [ X ( T) - \rho I ]  = 0
$$
  
for the multipliers $ \rho _ {j} $ is called the characteristic equation of equation (3) (or of the system (1)). To every eigen vector $ a ^ {(0)} $ of the monodromy matrix with multiplier $ \rho _ {0} $ corresponds a solution $ x ^ {(0)} ( t) = X ( t) a ^ {(0)} $ of (3) satisfying the condition
  
$$
x ^ {(0)} ( t + T )  = \rho _ {0} x ^ {(0)} ( t) .
$$
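
In practice the monodromy matrix and the multipliers are usually found numerically. The following fragment is an illustrative sketch and is not part of the original article: it assumes a concrete $ 2 \times 2 $ $ T $-periodic matrix $ A ( t) $, integrates $ dX/dt = A ( t) X $, $ X ( 0) = I $, column by column over one period with SciPy, and takes the eigenvalues of $ X ( T) $ as approximations to the multipliers in (4).

```python
# Illustrative sketch (not from the article): approximate the monodromy
# matrix X(T) of dx/dt = A(t) x and its multipliers rho_j, cf. (4).
import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi                              # assumed period of the coefficients

def A(t):                                  # hypothetical T-periodic coefficient matrix
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.3 * np.cos(t)), -0.1]])

def monodromy(A, T, n):
    """Integrate dX/dt = A(t) X, X(0) = I, over [0, T] one column at a time."""
    X_T = np.zeros((n, n))
    for k in range(n):
        e_k = np.zeros(n)
        e_k[k] = 1.0
        sol = solve_ivp(lambda t, x: A(t) @ x, (0.0, T), e_k,
                        rtol=1e-10, atol=1e-12)
        X_T[:, k] = sol.y[:, -1]
    return X_T

X_T = monodromy(A, T, 2)
multipliers = np.linalg.eigvals(X_T)       # roots of det[X(T) - rho I] = 0
print(multipliers)
```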
  
The Floquet–Lyapunov theorem holds: The transition matrix of (3) with $ T $-periodic matrix $ A ( t) $ can be represented in the form
  
$$ \tag{5 }
X ( t)  = F ( t) e ^ {tK} ,
$$
  
where $ K $ is a constant matrix and $ F ( t) $ is an absolutely-continuous matrix function, periodic with period $ T $, non-singular for all $ t \in \mathbf R $, and such that $ F ( 0) = I $. Conversely, if $ F ( t) $ and $ K $ are matrices with the given properties, then the matrix (5) is the transition matrix of an equation (3) with $ T $-periodic matrix $ A ( t) $. The matrix $ K $, called the indicator matrix, and the matrix function $ F ( t) $ in the representation (5) are not uniquely determined. In the case of real coefficients $ \alpha _ {jh} ( t) $, the matrix $ X ( t) $ in (5) is real, but $ F ( t) $ and $ K $ are, generally speaking, complex matrices. For this case there is a refinement of the Floquet–Lyapunov theorem: The transition matrix of (3) with $ T $-periodic real matrix $ A ( t) $ can be represented in the form (5), where $ K $ is a constant real matrix and $ F ( t) $ is a real absolutely-continuous matrix function, non-singular for all $ t $, satisfying the relations
  
$$
F ( t + T )  = F ( t) L ,\ \
F ( 0)  = I ,\  K L  = L K ,
$$
  
where $ L $ is a real matrix such that
  
$$
L ^ {2}  = I .
$$
  
In particular, $ F ( t + 2 T ) = F ( t) $. Conversely, if $ F ( t) $, $ K $ and $ L $ are arbitrary matrices with the given properties, then (5) is the transition matrix of an equation (3) with a $ T $-periodic real matrix $ A ( t) $.
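
Numerically, one admissible choice in (5) is $ K = T ^ {-1} \mathop{\rm ln} X ( T) $ (any branch of the matrix logarithm) together with $ F ( t) = X ( t) e ^ {-tK} $, which is then $ T $-periodic with $ F ( 0) = I $. A hedged sketch along these lines, reusing the hypothetical $ A ( t) $, the period $ T $ and the `monodromy` helper from the previous fragment:

```python
# Sketch continuing the previous fragment: one admissible indicator matrix K
# and the periodic factor F(t) of the representation (5), X(t) = F(t) e^{tK}.
from scipy.linalg import expm, logm

X_T = monodromy(A, T, 2)
K = logm(X_T) / T                          # e^{TK} = X(T); the branch is not unique

def X(t):                                  # transition matrix X(t), X(0) = I
    return monodromy(A, t, 2) if t > 0 else np.eye(2)

def F(t):                                  # F(t) = X(t) e^{-tK} is T-periodic
    return X(t) @ expm(-t * K)

# numerical check of the periodicity F(t + T) = F(t)
print(np.allclose(F(0.7), F(0.7 + T), atol=1e-6))
```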
  
 
An immediate consequence of (5) is Floquet's theorem, which asserts that equation (3) has a [[Fundamental system of solutions|fundamental system of solutions]] splitting into subsets, each of which has the form
 
  
$$
x ^ {(1)} ( t)  = e ^ {\lambda t } u _ {1} ( t) ,
$$

$$
x ^ {(2)} ( t)  = e ^ {\lambda t } [ t u _ {1} ( t) + u _ {2} ( t) ] ,
$$

$$
{\dots \dots \dots \dots \dots }
$$

$$
x ^ {(m)} ( t)  = e ^ {\lambda t } \left [
\frac{t ^ {m-1} }{( m - 1 ) ! }
u _ {1} ( t) + \dots + t u _ {m-1} ( t) + u _ {m} ( t) \right ] ,
$$
  
where the $ u _ {j} ( t) $ are absolutely-continuous $ T $-periodic (generally speaking, complex-valued) vector functions. (The given subset of solutions corresponds to one $ ( m \times m ) $-cell of the Jordan form of $ K $.) If all elementary divisors of $ K $ are simple (in particular, if all roots of the characteristic equation (4) are simple), then there is a fundamental system of solutions of the form
  
$$
x ^ {(j)} ( t)  = e ^ {\lambda _ {j} t } u _ {j} ( t) ,\ \
u _ {j} ( t + T )  = u _ {j} ( t) ,\ \
j = 1 \dots n .
$$
  
 
Formula (5) implies that (3) is reducible (see [[Reducible linear system|Reducible linear system]]) to the equation
 
  
$$
\frac{dy}{dt}  = K y
$$
  
by means of the change of variable $ x = F ( t) y $ (Lyapunov's theorem).
  
Let $ \rho _ {1} \dots \rho _ {n} $ be the multipliers of equation (3) and let $ K $ be an arbitrary indicator matrix, that is,
  
$$ \tag{6 }
e ^ {TK}  = X ( T) .
$$
  
The eigen values $ \lambda _ {1} \dots \lambda _ {n} $ of $ K $ are called the characteristic exponents (cf. [[Characteristic exponent|Characteristic exponent]]) of (3). From (6) one obtains $ e ^ {T \lambda _ {j} } = \rho _ {j} $, $ j = 1 \dots n $. The characteristic exponent $ \lambda $ can be defined as the complex number for which (3) has a solution that is representable in the form
  
$$
x ( t)  = e ^ {\lambda t } u ( t) ,
$$
  
where $ u ( t) $ is a $ T $-periodic vector-valued function. The main properties of the solutions in which one is usually interested in applications are determined by the characteristic exponents or multipliers of the given equation (see the Table).

<table border="1" cellpadding="4">
<tr> <td>Property of the solutions</td> <td>Characteristic exponents</td> <td>Multipliers</td> </tr>
<tr> <td>Stability of the trivial solution (boundedness on $ ( 0 , \infty ) $ of all solutions)</td> <td>Real parts non-positive; if zero or purely imaginary characteristic exponents are present, then simple elementary divisors of the indicator matrix correspond to them</td> <td>Situated inside or on the unit circle; in the latter case simple elementary divisors of the monodromy matrix correspond to them</td> </tr>
<tr> <td>Asymptotic stability of the trivial solution ($ | x ( t) | \rightarrow 0 $ as $ t \rightarrow \infty $ for any solution)</td> <td>Real parts negative</td> <td>Situated inside the unit circle</td> </tr>
<tr> <td>Boundedness of all solutions on $ ( - \infty , + \infty ) $</td> <td>Purely imaginary with simple elementary divisors of the indicator matrix</td> <td>Situated on the unit circle; simple elementary divisors of the monodromy matrix correspond to them</td> </tr>
<tr> <td>Instability of the trivial solution (existence of solutions unbounded on $ ( 0 , \infty ) $)</td> <td>There is either a characteristic exponent with positive real part or a purely imaginary one (in particular, zero) with non-simple elementary divisor of the indicator matrix</td> <td>There is a multiplier either outside the unit circle or one on the unit circle with a non-simple elementary divisor of the monodromy matrix</td> </tr>
<tr> <td>Existence of a $ T $-periodic solution</td> <td>For some characteristic exponent $ \lambda _ {j} $ one has $ \lambda _ {j} T = 2 \pi i m $ ($ m $ is an integer)</td> <td>One of the multipliers is equal to one</td> </tr>
<tr> <td>Existence of a semi-periodic solution, i.e. a solution $ x ( t) $ for which $ x ( t + T ) = - x ( t) $ for all $ t $</td> <td>For some characteristic exponent $ \lambda _ {j} $ one has $ \lambda _ {j} T = ( 2 m + 1 ) \pi i $ ($ m $ is an integer)</td> <td>There is a multiplier $ \rho = - 1 $</td> </tr>
</table>
  
In applications, the coefficients of (1) often depend on parameters; in the parameter space one must distinguish the domains at whose points the solutions of (1) have desired properties (usually these are the first four properties mentioned in the Table, or the fact that $ | x ( t) | \leq \textrm{ const }  e ^ {- \alpha t } $ with $ \alpha $ given). These problems thus reduce to the calculation or estimation of the characteristic exponents (multipliers) of (1).
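
As a rough illustration of such a parameter study (a sketch with an assumed one-parameter family, not taken from the article, reusing the `monodromy` helper from the first fragment), one can scan the parameter, recompute the multipliers for each value, and mark the values at which all of them lie strictly inside the unit circle (asymptotic stability, in line with the Table):

```python
# Sketch: classify parameter values by the largest multiplier modulus.
for eps in np.linspace(0.0, 1.5, 7):
    A_eps = lambda t, e=eps: np.array([[0.0, 1.0],
                                       [-(1.0 + e * np.cos(t)), -0.1]])
    rho = np.linalg.eigvals(monodromy(A_eps, T, 2))
    verdict = "asymptotically stable" if np.max(np.abs(rho)) < 1 else "not asymptotically stable"
    print(f"eps = {eps:4.2f}   max|rho| = {np.max(np.abs(rho)):.4f}   {verdict}")
```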
  
 
The equation
 
  
$$ \tag{7 }
\frac{dx}{dt}  = A ( t) x + f ( t) ,
$$

where $ A ( t) $ and $ f ( t) $ are a measurable $ T $-periodic matrix function and vector function, respectively, that are Lebesgue integrable on $ [ 0 , T ] $ ($ A ( t + T ) = A ( t) $, $ f ( t + T ) = f ( t) $ almost-everywhere), is called an  "inhomogeneous linear ordinary differential equation with periodic coefficients" . If the corresponding homogeneous equation

$$ \tag{8 }
\frac{dy}{dt}  = A ( t) y
$$
  
does not have $ T $-periodic solutions, then (7) has a unique $ T $-periodic solution. It can be determined by the formula
  
$$
x ( t)  = [ I - R ( t , 0 ) ] ^ {-1}
\int\limits _ { 0 } ^ { T }  R ( t , T - \tau ) f ( t - \tau )  d \tau ,
$$
  
where $ R ( t , s ) = Y ( t + T ) Y ( t + s ) ^ {-1} $ and $ Y ( t) $ is the transition matrix of the homogeneous equation (8); here $ R ( t + T , s ) = R ( t , s ) $ and $ \mathop{\rm det} [ I - R ( t , 0 ) ] \neq 0 $.
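
Numerically it is often more convenient to determine this unique $ T $-periodic solution from its initial value: if $ p ( t) $ is the solution of (7) with $ p ( 0) = 0 $ and $ Y ( t) $ the transition matrix of (8), then $ x ( T) = Y ( T) x ( 0) + p ( T) $, so periodicity forces $ x ( 0) = [ I - Y ( T) ] ^ {-1} p ( T) $. A sketch under this reformulation (again reusing the hypothetical $ A ( t) $ and the `monodromy` helper above; $ f ( t) $ is a hypothetical forcing):

```python
# Sketch: the unique T-periodic solution of dx/dt = A(t) x + f(t) when the
# homogeneous equation has no T-periodic solution (1 is not a multiplier).
def f(t):                                   # hypothetical T-periodic forcing
    return np.array([np.cos(t), 0.0])

def rhs(t, x):
    return A(t) @ x + f(t)

Y_T = monodromy(A, T, 2)                    # monodromy matrix of (8)
p = solve_ivp(rhs, (0.0, T), np.zeros(2), rtol=1e-10, atol=1e-12)
x0 = np.linalg.solve(np.eye(2) - Y_T, p.y[:, -1])    # periodic initial value

# integrating (7) from x0 over one period returns to x0:
x_per = solve_ivp(rhs, (0.0, T), x0, rtol=1e-10, atol=1e-12)
print(np.allclose(x_per.y[:, -1], x0, atol=1e-6))
```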
  
Suppose that (8) has $ d \geq 1 $ linearly independent $ T $-periodic solutions $ y _ {1} ( t) \dots y _ {d} ( t) $. Then the adjoint equation
  
$$
\frac{dz}{dt}  = - A ( t) ^ {*} z
$$
  
also has $ d $ linearly independent $ T $-periodic solutions, $ z _ {1} ( t) \dots z _ {d} ( t) $. The inhomogeneous equation (7) has a $ T $-periodic solution if and only if the orthogonality relations
  
$$ \tag{9 }
\int\limits _ { 0 } ^ { T }  ( f ( t) , z _ {j} ( t) )  dt  = 0 ,\  j = 1 \dots d ,
$$
  
hold. If so, an arbitrary $ T $-periodic solution of (7) has the form
  
$$
x ( t)  = x ^ {(0)} ( t) + \gamma _ {1} y _ {1} ( t) + \dots + \gamma _ {d} y _ {d} ( t) ,
$$
  
where $ \gamma _ {1} \dots \gamma _ {d} $ are arbitrary numbers and $ x ^ {(0)} ( t) $ is a $ T $-periodic solution of (7). Under the additional conditions
  
$$
\int\limits _ { 0 } ^ { T }  ( x ( t) , y _ {j} ( t) )  dt  = 0 ,\  j = 1 \dots d ,
$$
  
the $ T $-periodic solution $ x ( t) $ is determined uniquely; moreover, there is a constant $ \theta > 0 $, independent of $ f ( t) $, such that
  
$$
| x ( t) |  \leq  \theta \left ( \int\limits _ { 0 } ^ { T }  | f ( s) | ^ {2}  ds \right ) ^ {1/2} ,\  t \in [ 0 , T ] .
$$
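
For a concrete resonant illustration (a sketch, not from the article), take the constant matrix $ A = \| \begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix} \| $ with $ T = 2 \pi $: every solution of (8) is then $ T $-periodic ($ d = 2 $), the adjoint equation coincides with (8), and condition (9) can be tested by quadrature for a given forcing $ f ( t) $:

```python
# Sketch: checking the solvability condition (9) in a resonant example.
# A = [[0, 1], [-1, 0]] is constant, T = 2*pi, every solution of (8) is
# T-periodic (d = 2), and z_1, z_2 below are T-periodic solutions of the
# adjoint equation dz/dt = -A^T z (which here coincides with dz/dt = A z).
from scipy.integrate import quad

T = 2 * np.pi
z = [lambda t: np.array([np.cos(t), -np.sin(t)]),
     lambda t: np.array([np.sin(t),  np.cos(t)])]

def satisfies_condition_9(f):
    return all(abs(quad(lambda t, zj=zj: f(t) @ zj(t), 0.0, T)[0]) < 1e-10
               for zj in z)

print(satisfies_condition_9(lambda t: np.array([np.cos(2 * t), 0.0])))  # True: (9) holds
print(satisfies_condition_9(lambda t: np.array([np.cos(t), 0.0])))      # False: resonance
```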
  
 
Suppose one is given an equation
 
  
$$ \tag{10 }
\frac{dx}{dt}  = A ( t , \epsilon ) x
$$
  
with a matrix coefficient that holomorphically depends on a complex  "small"  parameter $ \epsilon $:
  
$$ \tag{11 }
A ( t , \epsilon )  = A _ {0} ( t) + \epsilon A _ {1} ( t) + \epsilon ^ {2} A _ {2} ( t) + \dots .
$$
  
Suppose that for $ | \epsilon | < \epsilon _ {0} $ the series
  
$$
\| A _ {0} ( \cdot ) \| + \epsilon \| A _ {1} ( \cdot ) \| + \epsilon ^ {2} \| A _ {2} ( \cdot ) \| + \dots
$$
  
 
converges, where
 
  
$$
\| A _ {j} ( \cdot ) \|  = \int\limits _ { 0 } ^ { T }  | A _ {j} ( t) |  dt ,
$$
  
which guarantees the (componentwise) convergence of the series (11) for $ | \epsilon | < \epsilon _ {0} $ in the space $ L ( 0 , T ) $. Then the transition matrix $ X ( t , \epsilon ) $ of (10) for fixed $ t \in [ 0 , T ] $ is an analytic function of $ \epsilon $ for $ | \epsilon | < \epsilon _ {0} $. Let $ A _ {0} ( t) = C $ be a constant matrix with eigen values $ \lambda _ {j} $, $ j = 1 \dots n $. Let $ \rho _ {j} ( \epsilon ) $ be the multipliers of equation (10), $ \rho _ {j} ( 0) = \mathop{\rm exp} ( \lambda _ {j} T ) $. If $ \rho _ {h _ {1} } ( 0) = \dots = \rho _ {h _ {r} } ( 0) = \mathop{\rm exp} ( \alpha ^ {(0)} T ) $ is a multiplier of multiplicity $ r $, then
  
$$ \tag{12 }
\lambda _ {h}  = \alpha ^ {(0)} +
\frac{2 \pi i }{T}
m _ {h} ,\  h = h _ {1} \dots h _ {r} ,
$$
  
where $ m _ {h} $ are integers. If simple elementary divisors of the monodromy matrix correspond to this multiplier, or, in other words, if to each $ \lambda _ {h} $, $ h = h _ {1} \dots h _ {r} $, correspond simple elementary divisors of the matrix $ C $ (for example, if all the numbers $ \lambda _ {h} $ are distinct), then $ \alpha ^ {(0)} $ is called an $ r $-fold characteristic exponent (of equation (10) with $ \epsilon = 0 $) of simple type. It turns out that the corresponding $ r $ characteristic exponents of (10) with small $ \epsilon > 0 $ can be very easily computed to a first approximation. Namely, let $ a _ {h} $ and $ b _ {h} $ be the corresponding normalized eigen vectors of the matrices $ C $ and $ C ^ {*} $;
  
$$
C a _ {h}  = \lambda _ {h} a _ {h} ,\  C ^ {*} b _ {h}  = \overline \lambda \; _ {h} b _ {h} ,
$$
  
$$
( a _ {j} , b _ {h} )  = \delta _ {jh} ,\  j , h = h _ {1} \dots h _ {r} ;
$$
  
 
let
 
  
$$
A _ {1} ( t)  \sim  \sum _ {m = - \infty } ^ { + \infty } A _ {1} ^ {( m)}  \mathop{\rm exp} \left (
\frac{2 \pi i m t }{T}
\right )
$$
be the Fourier series of $ A _ {1} ( t) $, and let

$$
\sigma _ {jh}  = ( A _ {1} ^ {( m _ {h} - m _ {j} ) } a _ {j} , b _ {h} ) ,\  j , h = h _ {1} \dots h _ {r} ,
$$
  
where $ m _ {j} $ are the numbers from (12). Then for the corresponding $ r $ characteristic exponents $ \alpha _ {h} ( \epsilon ) $, $ h = h _ {1} \dots h _ {r} $, of (10), which become $ \alpha ^ {(0)} $ for $ \epsilon = 0 $, one has series expansions in fractional powers of $ \epsilon $, starting with terms of the first order:
  
$$ \tag{13 }
\alpha _ {h} ( \epsilon )  = \alpha ^ {(0)} + \beta _ {h} \epsilon + O ( \epsilon ^ {1 + 1 / q _ {h} } ) ,\  h = h _ {1} \dots h _ {r} .
$$
  
Here the $ \beta _ {h} $ are the roots (written as many times as their multiplicity) of the equation
  
$$
\mathop{\rm det}  \| \sigma _ {jh} - \beta \delta _ {jh} \|  = 0
$$
  
and $ q _ {h} $ are natural numbers equal to the multiplicities of the corresponding $ \beta _ {h} $ ($ \delta _ {jj} = 1 $, $ \delta _ {jh} = 0 $ for $ j \neq h $). If the root $ \beta _ {h} $ is simple, then $ q _ {h} = 1 $ and the corresponding function $ \alpha _ {h} ( \epsilon ) $ is analytic at $ \epsilon = 0 $. From (13) it follows that cases are possible in which the  "unperturbed"  (that is, with $ \epsilon = 0 $) system is stable (all the $ \lambda _ {j} $ are purely imaginary and simple elementary divisors correspond to them), but the  "perturbed"  system (small $ \epsilon \neq 0 $) is unstable ($ \mathop{\rm Re}  \beta _ {h} > 0 $ for at least one $ \beta _ {h} $). This phenomenon of stability loss for an arbitrarily small periodic change of parameters (with time) is called parametric resonance. Similar but more complicated formulas hold for characteristic exponents of non-simple type.
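
Parametric resonance is easy to observe numerically. For the scalar equation $ \ddot{x} + ( \omega _ {0} ^ {2} + \epsilon \cos t ) x = 0 $, written as a first-order system, the unperturbed problem with $ \omega _ {0} = 1/2 $ has both multipliers on the unit circle, while for small $ \epsilon \neq 0 $ (the forcing frequency equals $ 2 \omega _ {0} $) a multiplier moves outside the unit circle. A sketch, reusing the `monodromy` helper from the earlier fragment:

```python
# Sketch: parametric resonance for x'' + (omega0^2 + eps*cos t) x = 0,
# omega0 = 1/2, forcing period T = 2*pi (forcing frequency = 2*omega0).
omega0 = 0.5
T = 2 * np.pi

for eps in (0.0, 0.05, 0.1):
    A_eps = lambda t, e=eps: np.array([[0.0, 1.0],
                                       [-(omega0**2 + e * np.cos(t)), 0.0]])
    rho = np.linalg.eigvals(monodromy(A_eps, T, 2))
    print(f"eps = {eps:.2f}   max|rho| = {np.max(np.abs(rho)):.4f}")
# eps = 0 gives |rho| = 1; small eps > 0 already gives max|rho| > 1 (instability).
```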
  
Let $ \rho ^ {(1)} \dots \rho ^ {(q)} $ be the distinct multipliers of equation (3) and let $ n _ {1} \dots n _ {q} $ be their multiplicities, where $ n _ {1} + \dots + n _ {q} = n $. Suppose that the points $ \rho ^ {(j)} $ on the complex $ \zeta $-plane are surrounded by non-intersecting discs $ | \zeta - \rho ^ {(j)} | \leq R _ {j} $ and that a cut, not intersecting these discs, is drawn from the point $ \zeta = 0 $ to the point $ \zeta = \infty $. Suppose that with each multiplier $ \rho ^ {(j)} $ is associated an arbitrary integer $ m _ {j} $ and that $ U = X ( T , \epsilon ) $ is the transition matrix of (10). The branches of the logarithm $ ( \mathop{\rm ln}  \zeta ) _ {m} $ are determined by means of the cut. The matrix $ \mathop{\rm ln}  U $ (the  "matrix logarithm" ) can be defined by the formula
  
and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490195.png" /> are natural numbers equal to the multiplicities of the corresponding <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490196.png" /> (<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490197.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490198.png" /> for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490199.png" />). If the root <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490200.png" /> is simple, then <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490201.png" /> and the corresponding function <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490202.png" /> is analytic for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490203.png" />. From (13) it follows that cases are possible in which the "unperturbed" (that is, with <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490204.png" />) system is stable (all the <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490205.png" /> are purely imaginary and simple elementary divisors correspond to them), but the "perturbed" system (small <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490206.png" />) is unstable (<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490207.png" /> for at least one <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490208.png" />). This phenomenon of stability loss for an arbitrary small periodic change of parameters (with time) is called parametric resonance. Similar but more complicated formulas hold for characteristic exponents of non-simple type.
+
$$ \tag{14 }
 +
\mathop{\rm ln}  U  =
 +
\frac{1}{2 \pi i }
 +
\sum_{j=1}^ { q }  \int\limits _ {\Gamma _ {j} } ( \zeta I - U ) ^ {-1} \mathop{\rm ln} \zeta ) _ {m _ {j} } d
 +
\zeta ,
 +
$$
  
Let <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490209.png" /> be the distinct multipliers of equation (3) and let <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490210.png" /> be their multiplicities, where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490211.png" />. Suppose that the points <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490212.png" /> on the complex <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490213.png" />-plane are surrounded by non-intersecting discs <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490214.png" /> and that a cut, not intersecting these discs, is drawn from the point <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490215.png" /> to the point <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490216.png" />. Suppose that with each multiplier <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490217.png" /> is associated an arbitrary integer <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490218.png" /> and that <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490219.png" /> is the transition matrix of (10). The branches of the logarithm <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490220.png" /> are determined by means of the cut. The matrix <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490221.png" /> ( "matrix logarithmmatrix logarithm" ) can be defined by the formula
+
where  $  \Gamma _ {j} $
 +
is the circle  $  | \zeta - \rho  ^ {(j)} | = R _ {j} $.  
 +
The set of numbers  $  m _ {1} \dots m _ {q} $
 +
determines a branch of the matrix logarithm. Also, $  \mathop{\rm exp} (  \mathop{\rm ln}  U ) = U $
 +
for small  $  \epsilon $.  
 +
Generally speaking, formula (14) for all possible  $  m _ {1} \dots m _ {q} $
 +
does not cover all the values of the matrix logarithm, that is, all solutions  $  Z $
 +
of the equation  $  \mathop{\rm exp}  Z = U $.  
 +
However, the solution given by (14) has the important property of holomorphy: The entries of the matrix  $  \mathop{\rm ln}  U $
 +
in (14) are holomorphic functions of the entries of  $  U $.  
 +
For equation (10), formula (5) takes the form
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490222.png" /></td> <td valign="top" style="width:5%;text-align:right;">(14)</td></tr></table>
+
$$ \tag{15 }
 +
X ( t , \epsilon )  = F ( t , \epsilon )  \mathop{\rm exp}  [ tK (
 +
\epsilon ) ] ,
 +
$$
  
where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490223.png" /> is the circle <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490224.png" />. The set of numbers <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490225.png" /> determines a branch of the matrix logarithm. Also, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490226.png" /> for small <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490227.png" />. Generally speaking, formula (14) for all possible <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490228.png" /> does not cover all the values of the matrix logarithm, that is, all solutions <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490229.png" /> of the equation <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490230.png" />. However, the solution given by (14) has the important property of holomorphy: The entries of the matrix <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490231.png" /> in (14) are holomorphic functions of the entries of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490232.png" />. For equation (10), formula (5) takes the form
+
where $  F ( t+ T , \epsilon ) = F ( t , \epsilon ) $,  
 +
$  K ( \epsilon ) = T  ^ {-1}  \mathop{\rm ln}  X ( T , \epsilon ) $.  
 +
If  $  \mathop{\rm ln}  X ( T , \epsilon ) $
 +
is determined in accordance with (14), then
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490233.png" /></td> <td valign="top" style="width:5%;text-align:right;">(15)</td></tr></table>
+
$$ \tag{16 }
 +
\left .
 +
\begin{array}{c}
 +
K ( \epsilon )  = K _ {0} + \epsilon K _ {1} + \dots ,  \\
 +
F ( t , \epsilon )  = F _ {0} ( t) + \epsilon F _ {1} ( t) + \dots  \\
 +
\end{array}
 +
\right \}
 +
$$
  
where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490234.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490235.png" />. If <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490236.png" /> is determined in accordance with (14), then
+
are series that converge for small  $  | \epsilon | $.  
 +
The main information about the behaviour of the solutions as  $  t \rightarrow + \infty $
 +
which is usually of interest in applications is contained in the indicator matrix  $  K ( \epsilon ) $.  
 +
Below a method for the asymptotic integration of (10) is given, that is, a method for successively determining the coefficients  $  K _ {j} $
 +
and  $  F _ {j} ( t) $
 +
in (16).
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490237.png" /></td> <td valign="top" style="width:5%;text-align:right;">(16)</td></tr></table>
+
Suppose that  $  A _ {0} ( t) \equiv C $
 +
in (11). Although  $  X ( t , 0 ) = \mathop{\rm exp} ( tC ) $,
 +
generally speaking there is no branch of the matrix logarithm such that the matrix  $  K ( \epsilon ) $
 +
is analytic for  $  \epsilon = 0 $
 +
and  $  K ( 0) = C $.  
 +
This branch of the logarithm will exist in the so-called non-resonance case, when among the eigen values  $  \lambda _ {j} $
 +
of  $  C $
 +
there are no numbers for which
  
are series that converge for small <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490238.png" />. The main information about the behaviour of the solutions as <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490239.png" /> which is usually of interest in applications is contained in the indicator matrix <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490240.png" />. Below a method for the asymptotic integration of (10) is given, that is, a method for successively determining the coefficients <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490241.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490242.png" /> in (16).
+
$$
 +
\lambda _ {j} - \lambda _ {h}  =
 +
\frac{2 \pi m i }{T}
 +
  \neq  0
 +
$$
  
Suppose that <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490243.png" /> in (11). Although <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490244.png" />, generally speaking there is no branch of the matrix logarithm such that the matrix <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490245.png" /> is analytic for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490246.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490247.png" />. This branch of the logarithm will exist in the so-called non-resonance case, when among the eigen values <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490248.png" /> of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490249.png" /> there are no numbers for which
+
( $  m $
 +
is an integer). In the resonance case (when such eigen values exist) equation (10) reduces by a suitable change of variable  $  x = P ( t) y $,
 +
where  $  P ( t+ T ) = P ( t) $,
 +
to an analogous equation for which the non-resonance case holds. The matrix  $  P ( t) $
 +
can be determined from the matrix  $  C $.
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490250.png" /></td> </tr></table>
+
In (16), in the non-resonance case  $  K _ {0} = C $,
 +
$  F _ {0} ( t) \equiv I $,
 +
and the matrices  $  F _ {j} ( t) $,
 +
$  K _ {j} $,
 +
$  j = 1 , 2 \dots $
 +
are found from the equation
  
(<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490251.png" /> is an integer). In the resonance case (when such eigen values exist) equation (10) reduces by a suitable change of variable <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490252.png" />, where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490253.png" />, to an analogous equation for which the non-resonance case holds. The matrix <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490254.png" /> can be determined from the matrix <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490255.png" />.
+
$$
  
In (16), in the non-resonance case <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490256.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490257.png" />, and the matrices <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490258.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490259.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490260.png" /> are found from the equation
+
\frac{dF}{dt}
 +
  =  [ C + \epsilon A _ {1} ( t) + \epsilon  ^ {2} A _ {2} ( t)
 +
+ \dots ] F ( t , \epsilon ) - F ( t , \epsilon ) K ( \epsilon ) ,
 +
$$
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490261.png" /></td> </tr></table>
+
after equating coefficients at the same powers of  $  \epsilon $
 +
in this equation. To determine  $  Z ( t) = F _ {j} ( t) $
 +
and  $  L = K _ {j} $
 +
one obtains a matrix equation of the form
  
after equating coefficients at the same powers of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490262.png" /> in this equation. To determine <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490263.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490264.png" /> one obtains a matrix equation of the form
+
$$ \tag{17 }
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490265.png" /></td> <td valign="top" style="width:5%;text-align:right;">(17)</td></tr></table>
+
\frac{dZ}{dt}
 +
  = CZ - ZC + \Phi ( t) - L ,
 +
$$
  
where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490266.png" />. The matrices <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490267.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490268.png" /> are found, and moreover uniquely (the non-resonance case), from (17) and the periodicity condition <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059490/l059490269.png" />.
+
where $  \Phi ( t+ T ) = \Phi ( t) $.  
 +
The matrices $  Z ( t) $
 +
and $  L $
 +
are found, and moreover uniquely (the non-resonance case), from (17) and the periodicity condition $  Z ( t+ T ) = Z ( t) $.
  
 
For special cases of the system (1) see [[Hamiltonian system, linear|Hamiltonian system, linear]] and [[Hill equation|Hill equation]].
 
For special cases of the system (1) see [[Hamiltonian system, linear|Hamiltonian system, linear]] and [[Hill equation|Hill equation]].
Line 191: Line 572:
 
====References====
 
====References====
 
<table><TR><TD valign="top">[1]</TD> <TD valign="top">  I.Z. Shtokalo,  "Linear differential equations with variable coefficients: criteria of stability and unstability of their solutions" , Hindushtan Publ. Comp.  (1961)  (Translated from Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  N.P. Erugin,  "Linear systems of ordinary differential equations with periodic and quasi-periodic coefficients" , Acad. Press  (1966)  (Translated from Russian)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top">  V.A. Yakubovich,  V.M. Starzhinskii,  "Linear differential equations with periodic coefficients" , Wiley  (1975)  (Translated from Russian)</TD></TR></table>
 
<table><TR><TD valign="top">[1]</TD> <TD valign="top">  I.Z. Shtokalo,  "Linear differential equations with variable coefficients: criteria of stability and unstability of their solutions" , Hindushtan Publ. Comp.  (1961)  (Translated from Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  N.P. Erugin,  "Linear systems of ordinary differential equations with periodic and quasi-periodic coefficients" , Acad. Press  (1966)  (Translated from Russian)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top">  V.A. Yakubovich,  V.M. Starzhinskii,  "Linear differential equations with periodic coefficients" , Wiley  (1975)  (Translated from Russian)</TD></TR></table>
 
 
  
 
====Comments====
 
====Comments====
 
  
 
====References====
 
====References====
 
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  R.W. Brockett,  "Finite dimensional linear systems" , Wiley  (1970)</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top">  J.K. Hale,  "Ordinary differential equations" , Wiley  (1969)</TD></TR></table>
 
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  R.W. Brockett,  "Finite dimensional linear systems" , Wiley  (1970)</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top">  J.K. Hale,  "Ordinary differential equations" , Wiley  (1969)</TD></TR></table>

Latest revision as of 08:59, 21 January 2024


A system of $ n $ linear ordinary differential equations of the form

$$ \tag{1 } \left . \begin{array}{c} \frac{d x _ {1} }{d t } = \ \alpha _ {11} ( t) x _ {1} + \dots + \alpha _ {1n} ( t) x _ {n} , \\ {\dots \dots \dots \dots \dots } \\ \frac{d x _ {n} }{dt} = \alpha _ {n1} ( t) x _ {1} + \dots + \alpha _ {nn} ( t) x _ {n} , \\ \end{array} \right \} $$

where $ t $ is a real variable, $ \alpha _ {jh} ( t) $ and $ x _ {h} = x _ {h} ( t) $ are complex-valued functions, and

$$ \tag{2 } \alpha _ {jh} ( t + T ) = \alpha _ {jh} ( t) \ \ \textrm{ for any } j , h . $$

The number $ T > 0 $ is called the period of the coefficients of the system (1). It is convenient to write (1) as one vector equation

$$ \tag{3 } \frac{dx}{dt} = A ( t) x , $$

where

$$ x ^ {T} = ( x _ {1} \dots x _ {n} ) ,\ \ A ( t) = \| \alpha _ {jh} ( t) \| ,\ \ j , h = 1 \dots n . $$

It is assumed that the functions $ \alpha _ {jh} ( t) $ are defined for $ t \in \mathbf R $ and are measurable and Lebesgue integrable on $ [ 0 , T ] $, and that the equalities (2) are satisfied almost-everywhere, that is, $ A ( t + T ) = A ( t) $. A solution of (3) is a vector function $ x = x ( t) $ with absolutely-continuous components such that (3) is satisfied almost-everywhere. Suppose that $ t _ {0} \in \mathbf R $ and $ a $ are an (arbitrarily) given number and vector. A solution $ x ( t) $ satisfying the condition $ x ( t _ {0} ) = a $ exists and is uniquely determined. A matrix $ X ( t) $ of order $ n $ with absolutely-continuous entries is called the matrizant (or evolution matrix, or transition matrix, or Cauchy matrix) of (3) if almost-everywhere on $ \mathbf R $ one has

$$ \frac{dX}{dt} = A ( t) X $$

and $ X ( 0) = I $, where $ I $ is the unit $ n \times n $ matrix. The transition matrix $ X ( t) $ satisfies the relation

$$ X ( t + T ) = X ( t) X ( T) ,\ \ t \in \mathbf R . $$

The matrix $ X ( T) $ is called the monodromy matrix, and its eigen values $ \rho _ {j} $ are called the multipliers of (3). The equation

$$ \tag{4 } \mathop{\rm det} [ X ( T) - \rho I ] = 0 $$

for the multipliers $ \rho _ {j} $ is called the characteristic equation of equation (3) (or of the system (1)). To every eigen vector $ a ^ {(0)} $ of the monodromy matrix with multiplier $ \rho _ {0} $ corresponds a solution $ x ^ {(0)} ( t) = X ( t) a ^ {(0)} $ of (3) satisfying the condition

$$ x ^ {(0)} ( t + T ) = \rho _ {0} x ^ {(0)} ( t) . $$
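A minimal numerical sketch of these objects (the concrete $ 2 \times 2 $ matrix $ A ( t) $, the period $ T = 2 \pi $ and the tolerances below are illustrative assumptions, not taken from the article; numpy and scipy are assumed to be available):

import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi
def A(t):                                   # an illustrative T-periodic coefficient matrix
    return np.array([[0.0, 1.0], [-(1.0 + 0.3 * np.cos(t)), -0.1]])

def X(t):                                   # transition matrix: dX/dt = A(t) X, X(0) = I
    sol = solve_ivp(lambda s, v: (A(s) @ v.reshape(2, 2)).ravel(),
                    (0.0, t), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

M = X(T)                                    # monodromy matrix X(T)
print(np.allclose(X(1.0 + T), X(1.0) @ M, atol=1e-6))       # X(t+T) = X(t) X(T)

rho, vecs = np.linalg.eig(M)                # the eigen values rho are the multipliers
rho0, a0 = rho[0], vecs[:, 0]
print(np.allclose(X(1.0 + T) @ a0, rho0 * (X(1.0) @ a0), atol=1e-6))   # x(t+T) = rho0 x(t)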

The Floquet–Lyapunov theorem holds: The transition matrix of (3) with $ T $- periodic matrix $ A ( t) $ can be represented in the form

$$ \tag{5 } X ( t) = F ( t) e ^ {tK} , $$

where $ K $ is a constant matrix and $ F ( t) $ is an absolutely-continuous matrix function, periodic with period $ T $, non-singular for all $ t \in \mathbf R $, and such that $ F ( 0) = I $. Conversely, if $ F ( t) $ and $ K $ are matrices with the given properties, then the matrix (5) is the transition matrix of an equation (3) with $ T $- periodic matrix $ A ( t) $. The matrix $ K $, called the indicator matrix, and the matrix function $ F ( t) $ in the representation (5) are not uniquely determined. In the case of real coefficients $ \alpha _ {jh} ( t) $ in (5), $ X ( t) $ is a real matrix, but $ F ( t) $ and $ K $ are complex matrices, generally speaking. For this case there is a refinement of the Floquet–Lyapunov theorem: The transition matrix of (3) with $ T $- periodic real matrix $ A ( t) $ can be represented in the form (5), where $ K $ is a constant real matrix and $ F ( t) $ is a real absolutely-continuous matrix function, non-singular for all $ t $, satisfying the relations

$$ F ( t + T ) = F ( t) L ,\ \ F ( 0) = I ,\ K L = L K , $$

where $ L $ is a real matrix such that

$$ L ^ {2} = I . $$

In particular, $ F ( t + 2 T ) = F ( t) $. Conversely, if $ F ( t) $, $ K $ and $ L $ are arbitrary matrices with the given properties, then (5) is the transition matrix of an equation (3) with a $ T $- periodic real matrix $ A ( t) $.
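A numerical sketch of the representation (5) for the same illustrative system (scipy.linalg.logm returns one particular branch of the matrix logarithm, so the $ K $ computed here is only one of the admissible indicator matrices):

import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm, logm

T = 2 * np.pi
A = lambda t: np.array([[0.0, 1.0], [-(1.0 + 0.3 * np.cos(t)), -0.1]])
def X(t):
    sol = solve_ivp(lambda s, v: (A(s) @ v.reshape(2, 2)).ravel(),
                    (0.0, t), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

K = logm(X(T)) / T                          # an indicator matrix: exp(T K) = X(T)
F = lambda t: X(t) @ expm(-t * K)           # F(t) = X(t) exp(-t K)
print(np.allclose(expm(T * K), X(T), atol=1e-6))            # exp(T K) = X(T) for this branch
print(np.allclose(F(0.7), F(0.7 + T), atol=1e-6))           # F is T-periodic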

An immediate consequence of (5) is Floquet's theorem, which asserts that equation (3) has a fundamental system of solutions splitting into subsets, each of which has the form

$$ x ^ {(1)} ( t) = e ^ {\lambda t } u _ {1} ( t) , $$

$$ x ^ {(2)} ( t) = e ^ {\lambda t } [ t u _ {1} ( t) + u _ {2} ( t) ] , $$

$$ {\dots \dots \dots \dots \dots } $$

$$ x ^ {(m)} ( t) = e ^ {\lambda t } \left [ \frac{t ^ {m-1} }{( m - 1 ) ! } u _ {1} ( t) + \dots + t u _ {m-1} ( t) + u _ {m} ( t) \right ] , $$

where the $ u _ {j} ( t) $ are absolutely-continuous $ T $- periodic (generally speaking, complex-valued) vector functions. (The given subset of solutions corresponds to one $ ( m \times m ) $- cell of the Jordan form of $ K $.) If all elementary divisors of $ K $ are simple (in particular, if all roots of the characteristic equation (4) are simple), then there is a fundamental system of solutions of the form

$$ x ^ {(j)} ( t) = e ^ {\lambda _ {j} t } u _ {j} ( t) ,\ \ u _ {j} ( t + T ) = u _ {j} ( t) ,\ \ j = 1 \dots n . $$

Formula (5) implies that (3) is reducible (see Reducible linear system) to the equation

$$ \frac{dy}{dt} = K y $$

by means of the change of variable $ x = F ( t) y $ (Lyapunov's theorem).

Let $ \rho _ {1} \dots \rho _ {n} $ be the multipliers of equation (3) and let $ K $ be an arbitrary indicator matrix, that is,

$$ \tag{6 } e ^ {TK} = X ( T) . $$

The eigen values $ \lambda _ {1} \dots \lambda _ {n} $ of $ K $ are called the characteristic exponents (cf. Characteristic exponent) of (3). From (6) one obtains $ e ^ {T \lambda _ {j} } = \rho _ {j} $, $ j = 1 \dots n $. The characteristic exponent $ \lambda $ can be defined as the complex number for which (3) has a solution that is representable in the form

$$ x ( t) = e ^ {\lambda t } u ( t) , $$

where $ u ( t) $ is a $ T $-periodic vector-valued function. The main properties of the solutions in which one is usually interested in applications are determined by the characteristic exponents or multipliers of the given equation (see the Table).

Stability of the trivial solution (boundedness on $ ( 0 , \infty ) $ of all solutions): the real parts of the characteristic exponents are non-positive, and if zero or purely imaginary characteristic exponents are present, then simple elementary divisors of the indicator matrix correspond to them; in terms of the multipliers: they are situated inside or on the unit circle, and in the latter case simple elementary divisors of the monodromy matrix correspond to them.

Asymptotic stability of the trivial solution ($ | x ( t) | \rightarrow 0 $ as $ t \rightarrow \infty $ for any solution): the real parts of the characteristic exponents are negative; in terms of the multipliers: they are situated inside the unit circle.

Boundedness of all solutions on $ ( - \infty , + \infty ) $: the characteristic exponents are purely imaginary and simple elementary divisors of the indicator matrix correspond to them; in terms of the multipliers: they are situated on the unit circle and simple elementary divisors of the monodromy matrix correspond to them.

Instability of the trivial solution (existence of solutions unbounded on $ ( 0 , \infty ) $): there is either a characteristic exponent with positive real part or a purely imaginary one (in particular, zero) with a non-simple elementary divisor of the indicator matrix; in terms of the multipliers: there is a multiplier either outside the unit circle or on the unit circle with a non-simple elementary divisor of the monodromy matrix.

Existence of a $ T $-periodic solution: for some characteristic exponent $ \lambda _ {j} $ one has $ \lambda _ {j} T = 2 \pi i m $ ($ m $ is an integer); in terms of the multipliers: one of the multipliers is equal to one.

Existence of a semi-periodic solution, i.e. a solution $ x ( t) $ for which $ x ( t + T ) = - x ( t) $ for all $ t $: for some characteristic exponent $ \lambda _ {j} $ one has $ \lambda _ {j} T = ( 2 m + 1 ) \pi i $ ($ m $ is an integer); in terms of the multipliers: there is a multiplier $ \rho = - 1 $.
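The criteria of the Table can be tested numerically from the multipliers; a rough sketch for the illustrative system used above (multipliers of modulus very close to one would in addition require examining the elementary divisors of the monodromy matrix, which is not attempted here):

import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi
A = lambda t: np.array([[0.0, 1.0], [-(1.0 + 0.3 * np.cos(t)), -0.1]])
sol = solve_ivp(lambda s, v: (A(s) @ v.reshape(2, 2)).ravel(),
                (0.0, T), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
rho = np.linalg.eigvals(sol.y[:, -1].reshape(2, 2))            # multipliers
lam = np.log(rho) / T                                          # characteristic exponents (mod 2 pi i / T)
print("multipliers:", np.round(rho, 4), " characteristic exponents:", np.round(lam, 4))

r = np.max(np.abs(rho))
if r < 1 - 1e-8:
    print("asymptotically stable: all multipliers inside the unit circle")
elif r > 1 + 1e-8:
    print("unstable: a multiplier lies outside the unit circle")
else:
    print("borderline: multipliers on the unit circle, elementary divisors decide")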

In applications, the coefficients of (1) often depend on parameters; in the parameter space one must distinguish the domains at whose points the solutions of (1) have desired properties (usually these are the first four properties mentioned in the Table, or the fact that $ | x ( t) | \leq \textrm{ const } e ^ {- \alpha t } $ with $ \alpha $ given). These problems thus reduce to the calculation or estimation of the characteristic exponents (multipliers) of (1).
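For example, for a two-parameter family of Hill type (the particular family, the grid and the tolerance below are illustrative assumptions) the stability domain can be mapped by computing the largest multiplier modulus over a grid of parameter values; a coarse sketch:

import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi
def max_multiplier(a, q):                   # largest |rho| for x'' + (a + q cos t) x = 0
    A = lambda t: np.array([[0.0, 1.0], [-(a + q * np.cos(t)), 0.0]])
    sol = solve_ivp(lambda s, v: (A(s) @ v.reshape(2, 2)).ravel(),
                    (0.0, T), np.eye(2).ravel(), rtol=1e-9, atol=1e-12)
    return np.max(np.abs(np.linalg.eigvals(sol.y[:, -1].reshape(2, 2))))

for a in np.linspace(0.1, 1.0, 4):
    print(f"a = {a:.2f}:",
          ["stable" if max_multiplier(a, q) <= 1 + 1e-6 else "unstable"
           for q in np.linspace(0.0, 0.6, 4)])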

The equation

$$ \tag{7 } \frac{dx}{dt} = A ( t) x + f ( t) , $$

where $ A ( t) $ and $ f ( t) $ are a measurable $ T $- periodic matrix function and vector function, respectively, that are Lebesgue integrable on $ [ 0 , T ] $ ($ A ( t + T ) = A ( t) $, $ f ( t + T ) = f ( t) $ almost-everywhere), is called an inhomogeneous linear ordinary differential equation with periodic coefficients. If the corresponding homogeneous equation

$$ \tag{8 } \frac{dy}{dt} = A ( t) y $$

does not have $ T $- periodic solutions, then (7) has a unique $ T $- periodic solution. It can be determined by the formula

$$ x ( t) = [ I - R ( t , 0 ) ] ^ {-1} \int\limits _ { 0 } ^ { T } R ( t , T - \tau ) f ( t - \tau ) d \tau , $$

where $ R ( t , s ) = Y ( t + T ) Y ( t + s ) ^ {-1} $ and $ Y ( t) $ is the transition matrix of the homogeneous equation (8); one has $ R ( t+ T , s ) = R ( t , s ) $ and $ \mathop{\rm det} [ I - R ( t , 0 ) ] \neq 0 $.
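A numerical sketch of this unique $ T $-periodic solution, computed in the equivalent form of solving the periodicity condition $ x ( T) = x ( 0) $ by variation of constants (the coefficient matrix and the forcing below are illustrative assumptions; no multiplier of (8) equals one for this choice):

import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi
A = lambda t: np.array([[0.0, 1.0], [-(1.0 + 0.3 * np.cos(t)), -0.1]])
f = lambda t: np.array([0.0, np.cos(2 * t)])            # T-periodic forcing

def value_at_T(x0, forced):                             # integrate (7) or (8) over one period
    rhs = lambda s, x: A(s) @ x + (f(s) if forced else 0.0)
    return solve_ivp(rhs, (0.0, T), x0, rtol=1e-10, atol=1e-12).y[:, -1]

Y_T = np.column_stack([value_at_T(e, False) for e in np.eye(2)])   # monodromy matrix Y(T)
p = value_at_T(np.zeros(2), True)                                   # x(T) when x(0) = 0
x0 = np.linalg.solve(np.eye(2) - Y_T, p)                            # periodicity: x(T) = x(0)
print(np.allclose(value_at_T(x0, True), x0, atol=1e-6))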

Suppose that (8) has $ d \geq 1 $ linearly independent $ T $- periodic solutions $ y _ {1} ( t) \dots y _ {d} ( t) $. Then the adjoint equation

$$ \frac{dz}{dt} = - A ( t) ^ {*} z $$

also has $ d $ linearly independent $ T $- periodic solutions, $ z _ {1} ( t) \dots z _ {d} ( t) $. The inhomogeneous equation (7) has a $ T $- periodic solution if and only if the orthogonality relations

$$ \tag{9 } \int\limits _ { 0 } ^ { T } ( f ( t) , z _ {j} ( t) ) dt = 0 ,\ j = 1 \dots d , $$

hold. If so, an arbitrary $ T $- periodic solution of (7) has the form

$$ x ( t) = x ^ {( 0 ) } ( t) + \gamma _ {1} y _ {1} ( t) + \dots + \gamma _ {d} y _ {d} ( t) , $$

where $ \gamma _ {1} \dots \gamma _ {d} $ are arbitrary numbers and $ x ^ {(0)} ( t) $ is some fixed $ T $- periodic solution of (7). Under the additional conditions

$$ \int\limits _ { 0 } ^ { T } ( x ( t) , y _ {j} ( t) ) dt = 0 ,\ j = 1 \dots d , $$

the $ T $- periodic solution $ x ( t) $ is determined uniquely; moreover, there is a constant $ \theta > 0 $, independent of $ f ( t) $, such that

$$ | x ( t) | \leq \theta \left ( \int\limits _ { 0 } ^ { T } | f ( s) | ^ {2} ds \right ) ^ {1/2} ,\ t \in [ 0 , T ] . $$
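As a simple illustration of condition (9) (a worked special case, not taken from the original article): for the scalar equation $ dx/dt = f ( t) $, that is, $ n = 1 $ and $ A ( t) \equiv 0 $, both the homogeneous equation (8) and its adjoint have the $ T $-periodic solution $ \equiv 1 $, so $ d = 1 $, and (9) reduces to

$$ \int\limits _ { 0 } ^ { T } f ( t) dt = 0 , $$

which is exactly the condition for $ x ( t) = x ( 0) + \int _ {0} ^ {t} f ( s) ds $ to be $ T $-periodic.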

Suppose one is given an equation

$$ \tag{10 } \frac{dx}{dt} = A ( t , \epsilon ) x $$

with a matrix coefficient that holomorphically depends on a complex "small" parameter $ \epsilon $:

$$ \tag{11 } A ( t , \epsilon ) = A _ {0} ( t) + \epsilon A _ {1} ( t) + \epsilon ^ {2} A _ {2} ( t) + \dots . $$

Suppose that for $ | \epsilon | < \epsilon _ {0} $ the series

$$ \| A _ {0} ( \cdot ) \| + \epsilon \| A _ {1} ( \cdot ) \| + \epsilon ^ {2} \| A _ {2} ( \cdot ) \| + \dots $$

converges, where

$$ \| A _ {j} ( \cdot ) \| = \int\limits _ { 0 } ^ { T } | A _ {j} ( t) | dt , $$

which guarantees the (componentwise) convergence of the series (11) for $ | \epsilon | < \epsilon _ {0} $ in the space $ L ( 0 , T ) $. Then the transition matrix $ X ( t , \epsilon ) $ of (10) for fixed $ t \in [ 0 , T ] $ is an analytic function of $ \epsilon $ for $ | \epsilon | < \epsilon _ {0} $. Let $ A _ {0} ( t) = C $ be a constant matrix with eigen values $ \lambda _ {j} $, $ j = 1 \dots n $. Let $ \rho _ {j} ( \epsilon ) $ be the multipliers of equation (10), $ \rho _ {j} ( 0) = \mathop{\rm exp} ( \lambda _ {j} T ) $. If $ \rho _ {h _ {1} } ( 0) = \dots = \rho _ {h _ {r} } ( 0) = \mathop{\rm exp} ( \alpha ^ {(0)} T) $ is a multiplier of multiplicity $ r $, then

$$ \tag{12 } \lambda _ {h} = \alpha ^ {(0)} + \frac{2 \pi i }{T} m _ {h} ,\ h = h _ {1} \dots h _ {r} , $$

where $ m _ {h} $ are integers. If simple elementary divisors of the monodromy matrix correspond to this multiplier, or, in other words, if to each $ \lambda _ {h} $, $ h = h _ {1} \dots h _ {r} $, correspond simple elementary divisors of the matrix $ C $( for example, if all the numbers $ \lambda _ {h} $ are distinct), then $ \alpha ^ {(0)} $ is called an $ r $- fold characteristic exponent (of equation (10) with $ \epsilon = 0 $) of simple type. It turns out that the corresponding $ r $ characteristic exponents of (10) with small $ \epsilon > 0 $ can be very easily computed to a first approximation. Namely, let $ a _ {h} $ and $ b _ {h} $ be the corresponding normalized eigen vectors of the matrices $ C $ and $ C ^ {*} $;

$$ C a _ {h} = \lambda _ {h} a _ {h} ,\ C ^ {*} b _ {h} = \overline \lambda \; _ {h} b _ {h} , $$

$$ ( a _ {j} , b _ {h} ) = \delta _ {jh} ,\ j , h = h _ {1} \dots h _ {r} ; $$

let

$$ A _ {1} ( t) \sim \sum _ {m = - \infty } ^ { + \infty } A _ {1} ^ {( m)} \mathop{\rm exp} \left ( \frac{2 \pi i mt }{T} \right ) $$

be the Fourier series of $ A _ {1} ( t) $, and let

$$ \sigma _ {jh} = ( A _ {1} ^ {( m _ {h} - m _ {j} ) } a _ {j} , b _ {h} ) ,\ j , h = h _ {1} \dots h _ {r} , $$

where $ m _ {j} $ are the numbers from (12). Then for the corresponding $ r $ characteristic exponents $ \alpha _ {h} ( \epsilon ) $, $ h = h _ {1} \dots h _ {r} $, of (10), which become $ \alpha ^ {(0)} $ for $ \epsilon = 0 $, one has series expansions in fractional powers of $ \epsilon $, starting with terms of the first order:

$$ \tag{13 } \alpha _ {h} ( \epsilon ) = \alpha ^ {(0)} + \beta _ {h} \epsilon + O ( \epsilon ^ {1 + 1 / q _ {h} } ) ,\ h = h _ {1} \dots h _ {r} . $$

Here the $ \beta _ {h} $ are the roots (written as many times as their multiplicity) of the equation

$$ \mathop{\rm det} \| \sigma _ {jh} - \beta \delta _ {jh} \| = 0 $$

and $ q _ {h} $ are natural numbers equal to the multiplicities of the corresponding $ \beta _ {h} $ ($ \delta _ {jj} = 1 $, $ \delta _ {jh} = 0 $ for $ j \neq h $). If the root $ \beta _ {h} $ is simple, then $ q _ {h} = 1 $ and the corresponding function $ \alpha _ {h} ( \epsilon ) $ is analytic for $ \epsilon = 0 $. From (13) it follows that cases are possible in which the "unperturbed" system (that is, with $ \epsilon = 0 $) is stable (all the $ \lambda _ {j} $ are purely imaginary and simple elementary divisors correspond to them), but the "perturbed" system (small $ \epsilon \neq 0 $) is unstable ($ \mathop{\rm Re} \beta _ {h} > 0 $ for at least one $ \beta _ {h} $). This phenomenon of loss of stability under an arbitrarily small periodic (in time) change of the parameters is called parametric resonance. Similar but more complicated formulas hold for characteristic exponents of non-simple type.
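A numerical illustration of parametric resonance (the specific equation $ x'' + ( 1 + \epsilon \cos 2 t ) x = 0 $, with coefficient period $ T = \pi $, and the value $ \epsilon = 0.1 $ are illustrative choices): for $ \epsilon = 0 $ both multipliers lie on the unit circle, while already for this small $ \epsilon \neq 0 $ one multiplier leaves it.

import numpy as np
from scipy.integrate import solve_ivp

T = np.pi                                   # period of the coefficient 1 + eps*cos(2t)
def max_multiplier(eps):
    A = lambda t: np.array([[0.0, 1.0], [-(1.0 + eps * np.cos(2 * t)), 0.0]])
    sol = solve_ivp(lambda s, v: (A(s) @ v.reshape(2, 2)).ravel(),
                    (0.0, T), np.eye(2).ravel(), rtol=1e-11, atol=1e-13)
    return np.max(np.abs(np.linalg.eigvals(sol.y[:, -1].reshape(2, 2))))

print(max_multiplier(0.0))    # = 1 up to integration error: stable
print(max_multiplier(0.1))    # > 1: the arbitrarily small periodic perturbation destabilizes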

Let $ \rho ^ {(1)} \dots \rho ^ {(q)} $ be the distinct multipliers of equation (3) and let $ n _ {1} \dots n _ {q} $ be their multiplicities, where $ n _ {1} + \dots + n _ {q} = n $. Suppose that the points $ \rho ^ {(j)} $ on the complex $ \zeta $- plane are surrounded by non-intersecting discs $ | \zeta - \rho ^ {(j)} | \leq R _ {j} $ and that a cut, not intersecting these discs, is drawn from the point $ \zeta = 0 $ to the point $ \zeta = \infty $. Suppose that with each multiplier $ \rho ^ {(j)} $ is associated an arbitrary integer $ m _ {j} $ and that $ U = X ( T , \epsilon ) $ is the monodromy matrix of (10). The branches of the logarithm $ ( \mathop{\rm ln} \zeta ) _ {m} $ are determined by means of the cut. The matrix $ \mathop{\rm ln} U $ (the matrix logarithm) can be defined by the formula

$$ \tag{14 } \mathop{\rm ln} U = \frac{1}{2 \pi i } \sum_{j=1}^ { q } \int\limits _ {\Gamma _ {j} } ( \zeta I - U ) ^ {-1} ( \mathop{\rm ln} \zeta ) _ {m _ {j} } d \zeta , $$

where $ \Gamma _ {j} $ is the circle $ | \zeta - \rho ^ {(j)} | = R _ {j} $. The set of numbers $ m _ {1} \dots m _ {q} $ determines a branch of the matrix logarithm. Also, $ \mathop{\rm exp} ( \mathop{\rm ln} U ) = U $ for small $ \epsilon $. Generally speaking, formula (14) for all possible $ m _ {1} \dots m _ {q} $ does not cover all the values of the matrix logarithm, that is, all solutions $ Z $ of the equation $ \mathop{\rm exp} Z = U $. However, the solution given by (14) has the important property of holomorphy: The entries of the matrix $ \mathop{\rm ln} U $ in (14) are holomorphic functions of the entries of $ U $.
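A numerical sketch of the branch construction (14) (the disc radius, the number of quadrature points and the test matrix are illustrative assumptions; the principal branch of $ \mathop{\rm ln} \zeta $ shifted by $ 2 \pi i m _ {j} $ is used on the $ j $-th circle, so the discs are assumed not to meet a cut along the negative real axis):

import numpy as np
from scipy.linalg import expm

def ln_U(U, ms, radius=0.1, npts=400):
    # formula (14): (1 / 2*pi*i) * sum_j of the integral over Gamma_j of
    # (zeta I - U)^{-1} (ln zeta)_{m_j} d zeta, with (ln zeta)_m = Log zeta + 2*pi*i*m
    n = U.shape[0]
    distinct = []
    for e in np.linalg.eigvals(U):          # group the eigen values into distinct multipliers
        if all(abs(e - d) > 1e-8 for d in distinct):
            distinct.append(e)
    out = np.zeros((n, n), dtype=complex)
    I = np.eye(n)
    for rho, m in zip(distinct, ms):
        for th in np.linspace(0.0, 2 * np.pi, npts, endpoint=False):
            zeta = rho + radius * np.exp(1j * th)
            dzeta = 1j * radius * np.exp(1j * th) * (2 * np.pi / npts)
            out += np.linalg.solve(zeta * I - U, I) * (np.log(zeta) + 2j * np.pi * m) * dzeta
    return out / (2j * np.pi)

U = expm(np.array([[0.1, 1.0], [-2.0, 0.2]]))                 # a test matrix, two distinct eigen values
print(np.allclose(expm(ln_U(U, ms=[0, 0])), U, atol=1e-6))    # exp(ln U) = U

For equation (10), formula (5) takes the form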

$$ \tag{15 } X ( t , \epsilon ) = F ( t , \epsilon ) \mathop{\rm exp} [ tK ( \epsilon ) ] , $$

where $ F ( t+ T , \epsilon ) = F ( t , \epsilon ) $, $ K ( \epsilon ) = T ^ {-1} \mathop{\rm ln} X ( T , \epsilon ) $. If $ \mathop{\rm ln} X ( T , \epsilon ) $ is determined in accordance with (14), then

$$ \tag{16 } \left . \begin{array}{c} K ( \epsilon ) = K _ {0} + \epsilon K _ {1} + \dots , \\ F ( t , \epsilon ) = F _ {0} ( t) + \epsilon F _ {1} ( t) + \dots \\ \end{array} \right \} $$

are series that converge for small $ | \epsilon | $. The main information about the behaviour of the solutions as $ t \rightarrow + \infty $, which is usually of interest in applications, is contained in the indicator matrix $ K ( \epsilon ) $. Below, a method for the asymptotic integration of (10) is given, that is, a method for successively determining the coefficients $ K _ {j} $ and $ F _ {j} ( t) $ in (16).

Suppose that $ A _ {0} ( t) \equiv C $ in (11). Although $ X ( t , 0 ) = \mathop{\rm exp} ( tC ) $, generally speaking there is no branch of the matrix logarithm such that the matrix $ K ( \epsilon ) $ is analytic for $ \epsilon = 0 $ and $ K ( 0) = C $. This branch of the logarithm will exist in the so-called non-resonance case, when among the eigen values $ \lambda _ {j} $ of $ C $ there are no numbers for which

$$ \lambda _ {j} - \lambda _ {h} = \frac{2 \pi m i }{T} \neq 0 $$

( $ m $ is an integer). In the resonance case (when such eigen values exist) equation (10) reduces by a suitable change of variable $ x = P ( t) y $, where $ P ( t+ T ) = P ( t) $, to an analogous equation for which the non-resonance case holds. The matrix $ P ( t) $ can be determined from the matrix $ C $.
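A small sketch of checking the non-resonance condition for a given constant matrix $ C $ and period $ T $ (the matrix below is an illustrative choice, and it is in fact resonant):

import numpy as np

def resonant(C, T, tol=1e-9):
    lam = np.linalg.eigvals(C)
    for lj in lam:
        for lh in lam:
            z = (lj - lh) * T / (2j * np.pi)       # resonance means z is a nonzero integer
            if abs(z) > tol and abs(z - round(z.real)) < tol:
                return True
    return False

print(resonant(np.diag([1j, -1j, -0.5]), 2 * np.pi))   # True: (i) - (-i) = 2i = 2*pi*i*2 / T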

In (16), in the non-resonance case $ K _ {0} = C $, $ F _ {0} ( t) \equiv I $, and the matrices $ F _ {j} ( t) $, $ K _ {j} $, $ j = 1 , 2 \dots $ are found from the equation

$$ \frac{dF}{dt} = [ C + \epsilon A _ {1} ( t) + \epsilon ^ {2} A _ {2} ( t) + \dots ] F ( t , \epsilon ) - F ( t , \epsilon ) K ( \epsilon ) , $$

after equating coefficients of like powers of $ \epsilon $ in this equation. To determine $ Z ( t) = F _ {j} ( t) $ and $ L = K _ {j} $ one obtains a matrix equation of the form

$$ \tag{17 } \frac{dZ}{dt} = CZ - ZC + \Phi ( t) - L , $$

where $ \Phi ( t+ T ) = \Phi ( t) $. The matrices $ Z ( t) $ and $ L $ are found, and moreover uniquely (the non-resonance case), from (17) and the periodicity condition $ Z ( t+ T ) = Z ( t) $.
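One way to carry this out (a sketch, assuming for simplicity that $ C $ has already been reduced to diagonal form, $ C = \mathop{\rm diag} ( \lambda _ {1} \dots \lambda _ {n} ) $): expand $ \Phi $ and $ Z $ in Fourier series, $ \Phi ( t) = \sum _ {m} \Phi ^ {(m)} e ^ {2 \pi i m t / T } $, $ Z ( t) = \sum _ {m} Z ^ {(m)} e ^ {2 \pi i m t / T } $. Comparing coefficients in (17) gives, entrywise,

$$ \left ( \frac{2 \pi i m }{T} - \lambda _ {j} + \lambda _ {h} \right ) Z _ {jh} ^ {(m)} = \Phi _ {jh} ^ {(m)} - \delta _ {m 0} L _ {jh} . $$

For $ m \neq 0 $ the factor on the left-hand side does not vanish, by the non-resonance condition, so the coefficients $ Z ^ {(m)} $, $ m \neq 0 $, are determined. For $ m = 0 $ and entries with $ \lambda _ {j} = \lambda _ {h} $ (in particular the diagonal entries) one must take $ L _ {jh} = \Phi _ {jh} ^ {(0)} $, the mean value of $ \Phi _ {jh} $; the remaining entries of $ L $ and the coefficient $ Z ^ {(0)} $ are then fixed by the normalization $ F ( 0 , \epsilon ) = I $, that is, $ Z ( 0) = 0 $.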

For special cases of the system (1) see Hamiltonian system, linear and Hill equation.

References

[1] I.Z. Shtokalo, "Linear differential equations with variable coefficients: criteria of stability and unstability of their solutions" , Hindustan Publ. Comp. (1961) (Translated from Russian)
[2] N.P. Erugin, "Linear systems of ordinary differential equations with periodic and quasi-periodic coefficients" , Acad. Press (1966) (Translated from Russian)
[3] V.A. Yakubovich, V.M. Starzhinskii, "Linear differential equations with periodic coefficients" , Wiley (1975) (Translated from Russian)

Comments

References

[a1] R.W. Brockett, "Finite dimensional linear systems" , Wiley (1970)
[a2] J.K. Hale, "Ordinary differential equations" , Wiley (1969)
How to Cite This Entry:
Linear system of differential equations with periodic coefficients. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Linear_system_of_differential_equations_with_periodic_coefficients&oldid=16408
This article was adapted from an original article by V.A. Yakubovich (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article