Hamiltonian system, linear

From Encyclopedia of Mathematics
{{TEX|done}}
A system

$$ \tag{1}
\frac{dp_j}{dt} = \frac{\partial \mathcal H}{\partial q_j}, \quad
\frac{dq_j}{dt} = - \frac{\partial \mathcal H}{\partial p_j}, \quad
j = 1, \dots, k,
$$
  
where $\mathcal H$ is a quadratic form in the variables $p_1, \dots, p_k$, $q_1, \dots, q_k$ with real coefficients which may depend on the time $t$. A linear Hamiltonian system is also called a linear canonical system. The system (1) may be written as a Hamiltonian vector equation
  
$$ \tag{2}
J \frac{dx}{dt} = H(t) x,
$$
  
where $x$ is the column vector $(p_1, \dots, p_k, q_1, \dots, q_k)$, $H(t) = H(t)^{*}$ is the matrix of the quadratic form $2 \mathcal H$, and
  
$$
J = \left\| \begin{array}{rc} 0 & I_k \\ -I_k & 0 \end{array} \right\|
$$
  
(here $I_k$ is the $k \times k$ identity matrix). Equation (2) with an arbitrary non-singular real skew-symmetric matrix $J$ may be reduced, by a suitable substitution $x = Sx_1$, where $S$ is a non-singular real matrix, to a similar form:
  
$$
J_1 \frac{dx_1}{dt} = H_1(t) x_1,
$$
  
where $J_1$ is any given real non-singular skew-symmetric matrix. It will be assumed that in (2) $|H(t)| \in L_1 [t_1, t_2]$ for all $-\infty < t_1 < t_2 < +\infty$.
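The effect of such a substitution can be checked directly: inserting $x = Sx_1$ into (2) and multiplying on the left by $S^{*}$ gives $(S^{*}JS)\,\dot x_1 = (S^{*}H(t)S)\,x_1$, so congruence by $S$ carries one admissible structure matrix into another. A minimal numerical sketch (all matrices below are chosen arbitrarily for illustration and are not from the article):

```python
import numpy as np

k = 2
rng = np.random.default_rng(0)

# Canonical structure matrix J = ||0 I; -I 0|| for k degrees of freedom.
I = np.eye(k)
Z = np.zeros((k, k))
J = np.block([[Z, I], [-I, Z]])

# A symmetric H (matrix of the quadratic form 2*Hamiltonian) and a
# well-conditioned non-singular real S.
A = rng.standard_normal((2 * k, 2 * k))
H = A + A.T
S = np.eye(2 * k) + 0.1 * rng.standard_normal((2 * k, 2 * k))

# Substituting x = S x1 in J x' = H x and multiplying by S^T gives
# J1 x1' = H1 x1 with:
J1 = S.T @ J @ S
H1 = S.T @ H @ S

assert np.allclose(J1, -J1.T)            # J1 is again skew-symmetric
assert abs(np.linalg.det(J1)) > 1e-3     # and non-singular
assert np.allclose(H1, H1.T)             # H1 is again symmetric
```

Reducing an arbitrary non-singular skew-symmetric $J_1$ to the canonical $J$ amounts to running this congruence in the opposite direction.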
The following equations can be reduced to (2): the second-order vector equation
  
$$ \tag{3}
\frac{d}{dt} \left[ R(t) \frac{dy}{dt} \right] + P(t) y = 0,
$$
  
in which $y$ is a $k$-th order vector, $R(t) = R(t)^{*}$ and $P(t) = P(t)^{*}$ are real $(k \times k)$-matrix functions and $\operatorname{det} R(t) \neq 0$; the equation
  
$$ \tag{3a}
\frac{d}{dt} \left[ R(t) \frac{dy}{dt} \right] + Q \frac{dy}{dt} + P(t) y = 0,
$$
where $Q = -Q^{*}$ is a constant matrix, $R(t) = R(t)^{*}$, $P(t) = P(t)^{*}$, $\operatorname{det} R(t) \neq 0$ (the matrices $P(t)$, $Q$, $R(t)$ are real); the scalar equation
$$ \tag{4}
\sum_{j=0}^{k} (-1)^j \frac{d^j}{dt^j} \left( \phi_j(t) \frac{d^j \eta}{dt^j} \right) = 0,
$$
where the $\phi_j(t)$ are real functions, $\phi_k(t) \neq 0$; and the corresponding vector equation. For equation (3),
$$
x = \left\| \begin{array}{c} y \\ z \end{array} \right\|, \quad
z = R \frac{dy}{dt};
$$
  
 
for equation (3a),
  
$$
x = \left\| \begin{array}{c} p \\ q \end{array} \right\|, \quad
\textrm{where} \quad
p = R \frac{dy}{dt} + \frac{1}{2} Qy, \quad q = y;
$$
  
for equation (4), $x_j = \eta^{(j-1)}$, $j = 1, \dots, k$,
  
$$
x_{j+k} = \phi_j x_{j+1} - x_{k+j+1}^\prime, \quad
j = 1, \dots, k-1,
$$
  
$$
x_{2k} = \phi_k x_k^\prime.
$$
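The reduction of (3) can be sketched numerically. Assuming constant $R$ and $P$ for simplicity, the substitution $x = (y, z)$, $z = R\,dy/dt$ turns (3) into $J\,dx/dt = Hx$ with the symmetric block matrix $H = \operatorname{diag}(-P, -R^{-1})$; this explicit block form is our own computation, consistent with the sign conventions of (2), and is not spelled out in the article:

```python
import numpy as np

k = 2
rng = np.random.default_rng(1)

# Symmetric R (non-singular) and symmetric P, constant for simplicity.
R = np.eye(k) + 0.1 * np.ones((k, k))
B = rng.standard_normal((k, k))
P = B + B.T

I = np.eye(k)
Z = np.zeros((k, k))
J = np.block([[Z, I], [-I, Z]])

# State x = (y, z) with z = R y'.  Equation (3) then says
#   y' = R^{-1} z,   z' = (R y')' = -P y,
# which is J x' = H x with the symmetric block matrix below.
H = np.block([[-P, Z], [Z, -np.linalg.inv(R)]])
assert np.allclose(H, H.T)

# Check the identity J x' = H x at a random state.
x = rng.standard_normal(2 * k)
y, z = x[:k], x[k:]
xdot = np.concatenate([np.linalg.solve(R, z), -P @ y])
assert np.allclose(J @ xdot, H @ x)
```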
  
The scalar equation (3) with $R(t) = 1$, i.e. the equation
  
$$
\frac{d^2 y}{dt^2} + P(t) y = 0,
$$
  
where $P(t)$ is a periodic function, is known as Hill's equation (cf. also [[Hill equation|Hill equation]]).
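For $k = 1$ the symplectic property of the evolution matrix (discussed below) reduces to $\operatorname{det} X(t) = 1$, by Liouville's formula, since the first-order form of Hill's equation has zero trace. A numerical sketch with an illustrative coefficient $P(t) = 0.3 + 0.1 \cos t$:

```python
import numpy as np

# Hill's equation y'' + P(t) y = 0 in first-order form X' = A(t) X,
# with an illustrative periodic coefficient.
P = lambda t: 0.3 + 0.1 * np.cos(t)
A = lambda s: np.array([[0.0, 1.0], [-P(s), 0.0]])

def step(t, X, h):
    # One classical RK4 step for the matrix ODE X' = A(t) X.
    k1 = A(t) @ X
    k2 = A(t + h / 2) @ (X + h / 2 * k1)
    k3 = A(t + h / 2) @ (X + h / 2 * k2)
    k4 = A(t + h) @ (X + h * k3)
    return X + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

X, t, h = np.eye(2), 0.0, 1e-3
dets = []
for i in range(10000):                    # integrate up to t = 10
    X = step(t, X, h)
    t += h
    if i % 1000 == 999:
        dets.append(np.linalg.det(X))

# det X(t) = 1 for all t: X(t) stays symplectic along the motion.
assert all(abs(d - 1) < 1e-8 for d in dets)
```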
  
Let $X(t)$ be the evolution matrix of equation (2) (i.e. the matrix of a [[Fundamental system of solutions|fundamental system of solutions]] of equation (2), normalized by the condition $X(0) = I_{2k}$). Introduce the indefinite scalar product $\langle x, y \rangle = i(Jx, y)$, where $(x, y) = \sum_{j=1}^{2k} x_j \overline{y}_j$ is the ordinary [[Inner product|inner product]]. A complex matrix $U$ which is unitary in the sense of this product (cf. also [[Unitary matrix|Unitary matrix]]), i.e. such that $U^{*} J U = J$, is called $J$-unitary; a real $J$-unitary matrix $X$ is called symplectic.
  
It is known (cf. [[Hamiltonian system|Hamiltonian system]]) that the Poincaré invariant, the exterior differential form $\sum_{j=1}^{k} dp_j \wedge dq_j$, is preserved along the trajectories of a Hamiltonian system. In the case of a linear Hamiltonian system this means that for any solutions $x^{(1)} = x^{(1)}(t)$, $x^{(2)} = x^{(2)}(t)$ of equation (2) one has $\langle x^{(1)}, x^{(2)} \rangle = \langle X(t) x^{(1)}(0), X(t) x^{(2)}(0) \rangle = \textrm{const}$, i.e. $X(t)$ is a symplectic matrix for any $t$. It follows from the relation $X^{*} J X = J$ that the eigen values of $X$ (counted with multiplicities and the orders of the Jordan cells) are symmetric (in the sense of inversion) with respect to the unit circle (the Lyapunov–Poincaré theorem). The eigen values of symplectic (and $J$-unitary) matrices which are equal in modulus to 1 are subdivided into eigen values of the first and second kind in accordance with the following rule. Let $\rho$ be an eigen value of a $J$-unitary matrix $U$ and let $|\rho| = 1$. Then the form $\langle x, x \rangle$ on the corresponding root subspace is non-degenerate. Let $p$ be the number of its positive and $q$ the number of its negative squares; one says that $p$ eigen values of the first kind and $q$ eigen values of the second kind coincide at $\rho$.
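Both the Lyapunov–Poincaré symmetry and the classification by kind can be observed numerically for $k = 1$. In the sketch below (the matrices are illustrative) the sign of $\langle x, x \rangle = i(Jx, x)$ on an eigenvector decides the kind of a unimodular eigen value:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # canonical structure matrix, k = 1

def kind(v):
    # Sign of the indefinite product <x, x> = i (J x, x) on an eigenvector:
    # positive -> first kind, negative -> second kind (for |rho| = 1).
    s = (1j * ((J @ v) @ v.conj())).real
    return 'first' if s > 0 else 'second'

# A symplectic matrix with both eigen values on the unit circle: a rotation.
th = 0.7
X = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
assert np.allclose(X.T @ J @ X, J)                   # X is symplectic
vals, vecs = np.linalg.eig(X)
kinds = sorted(kind(vecs[:, i]) for i in range(2))
assert kinds == ['first', 'second']                  # one eigen value of each kind

# A symplectic matrix with |rho| != 1: eigen values pair up as rho, 1/rho.
Y = np.diag([2.0, 0.5])
assert np.allclose(Y.T @ J @ Y, J)
mu = np.linalg.eigvals(Y)
assert np.isclose(mu[0] * mu[1], 1.0)                # symmetry w.r.t. the unit circle
```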
The kind of the purely-imaginary eigen values of the matrices $K = J^{-1} L$, $L^{*} = L$ (for which $\langle Kx, y \rangle = -\langle x, Ky \rangle$ for all $x, y$) is defined in the same way. For a $J$-unitary matrix $X$ the eigen values $\rho$ for which $|\rho| \neq 1$ are called eigen values of the first kind if $|\rho| < 1$, and eigen values of the second kind if $|\rho| > 1$. Any symplectic matrix has (counted with multiplicities) exactly $k$ eigen values $\rho_1, \dots, \rho_k$ of the first kind and $k$ eigen values $\rho_1^{-1}, \dots, \rho_k^{-1}$ of the second kind. If $\rho_1, \dots, \rho_k$ are suitably numbered, they are continuous functions of the matrix $X$ [[#References|[2]]], [[#References|[3]]].
  
 
==Oscillatory properties of solutions of linear Hamiltonian systems.==
The oscillatory properties of the solutions of equations (2)–(4) are involved in a number of problems in variational calculus, optimal control, the study of the spectrum of the corresponding differential operator, etc.
  
Definitions. I) Equation (3) is called oscillatory if for any $t_0 > 0$ it is possible to find numbers $t_2 > t_1 > t_0$ and a solution $y(t) \not\equiv 0$ such that $y(t_1) = y(t_2) = 0$, and is called non-oscillatory otherwise. II) Equation (4) is called oscillatory if for any $t_0 > 0$ it is possible to find a solution $\eta(t) \not\equiv 0$ which has at least two zeros $t_1, t_2$, $t_2 > t_1 > t_0$, of order $k$, and is called non-oscillatory otherwise. III) Equation (1) is called oscillatory if the function

$$ \tag{5}
\Delta \operatorname{Arg} X(t) = \sum_{j=1}^{k} \Delta \operatorname{Arg} \rho_j(t)
$$
  
is unbounded on $(t_0, \infty)$, and is called non-oscillatory otherwise. (In (5), the $\rho_j(t)$ are the eigen values of $X(t)$ of the first kind.) After equation (3) or (4) has been reduced to (2), the equation (2) thus obtained is oscillatory in the sense of III) if and only if equation (3) (or (4)) is oscillatory in the sense of definition I) (or II)). The following geometrical interpretation may be given to definition III). The group $\operatorname{Sp}(k, \mathbf R)$ of symplectic matrices $X$ is homeomorphic to the product of a connected and simply-connected topological space with the circle. The corresponding mapping may be chosen so that $\operatorname{exp} ( i \sum_{j=1}^{k} \operatorname{Arg} \rho_j )$ is the projection of the matrix $X \in \operatorname{Sp}(k, \mathbf R)$ onto the circle (here the $\rho_j$ are the eigen values of the first kind of $X$). Thus, equation (2) is oscillatory if, for $t \rightarrow \infty$, $X(t)$ "winds unboundedly" in $\operatorname{Sp}(k, \mathbf R)$. (If $k = 1$, this group is homeomorphic to a "solid torus", and the "winding" has a visual interpretation.) There exist various other definitions of the argument of a symplectic matrix, which correspond to other mappings of the group $\operatorname{Sp}(k, \mathbf R)$ to the circle, and which are equivalent to (5) in the sense that they all satisfy the inequality
$$ \tag{6}
| \Delta \operatorname{Arg}^\prime X(t) - \Delta \operatorname{Arg} X(t) | \leq c
$$
  
for any curve $X(t) \in \operatorname{Sp}(k, \mathbf R)$. Such arguments are, for example,
$$
\operatorname{Arg}_1 X = \operatorname{Arg} \operatorname{det} ( U_1 - iV_1 ); \quad
\operatorname{Arg}_2 X = \operatorname{Arg} \operatorname{det} ( U_2 - iV_2 ),
$$
  
where $U_j, V_j$ are the $(k \times k)$-submatrices of the matrix
$$
X = \left\| \begin{array}{cc} U_1 & U_2 \\ V_1 & V_2 \end{array} \right\|
$$
  
 
(cf. [[#References|[6]]]). There exist various effectively-verifyable sufficient (and sometimes necessary and sufficient) conditions of oscillatority and non-oscillatority of equations (2), (3) and (4) (see, for example, [[#References|[5]]] and the references to [[#References|[6]]]).
 
(cf. [[#References|[6]]]). There exist various effectively-verifyable sufficient (and sometimes necessary and sufficient) conditions of oscillatority and non-oscillatority of equations (2), (3) and (4) (see, for example, [[#References|[5]]] and the references to [[#References|[6]]]).
  
 
==Linear Hamiltonian systems with periodic coefficients.==
 
==Linear Hamiltonian systems with periodic coefficients.==
Let, in (2), <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/h/h046/h046280/h046280125.png" /> almost-everywhere. The matrix <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/h/h046/h046280/h046280126.png" /> is called the monodromy matrix of equation (2), and its eigen values are called the multipliers of (2). Equation (2) (or the corresponding Hamiltonian <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/h/h046/h046280/h046280127.png" />) is called strongly stable if all its solutions are bounded on <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/h/h046/h046280/h046280128.png" />, and this property is preserved under small deformations of the Hamiltonian in the sense of the norm <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/h/h046/h046280/h046280129.png" />. Strong instability of equation (2) (of the Hamiltonian <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/h/h046/h046280/h046280130.png" />) is defined in an analogous manner. For equation (2) to be strongly stable it is necessary and sufficient that all its multipliers lie on the unit circle and that no two multipliers of different kinds coincide (in other words, that all root subspaces of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/h/h046/h046280/h046280131.png" /> be definite in the sense of the product <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/h/h046/h046280/h046280132.png" />). Equation (2) is strongly unstable if and only if some of its multipliers lie outside the unit circle. 
Two samples of multipliers (taken with their kinds) which do not include coincident multipliers of different kinds are called equivalent if one sample can be continuously converted into the other so that multipliers of different kinds do not meet. The class of equivalent samples is called a multiplier type. In the case of stability there are <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/h/h046/h046280/h046280133.png" /> multiplier types. They may be denoted by symbols of the form <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/h/h046/h046280/h046280134.png" /> in which the plus and minus signs correspond to the kind of multipliers which are successively encountered when moving along the upper half-circle <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/h/h046/h046280/h046280135.png" /> from the point <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/h/h046/h046280/h046280136.png" /> to the point <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/h/h046/h046280/h046280137.png" />. Let <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/h/h046/h046280/h046280138.png" /> be the Banach space of all Hamiltonians of the above type with norm <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/h/h046/h046280/h046280139.png" />. 
The set $ O \subset L $ of strongly-stable Hamiltonians breaks up in $ L $ into a countable number of domains $ O _ {n} ^ {( \mu ) } $, $ n = 0, \pm 1, \pm 2 ,\dots $; $ \mu = \mu _ {1} \dots \mu _ {2k} $. The domain $ O _ {n} ^ {( \mu ) } $ is the set of all Hamiltonians to which correspond the multiplier type $ \mu $ and the integer $ n $ defined by the formula

$$ \left . \Delta  \mathop{\rm Arg}  X ( t) \right | _ {0} ^ {T}  = 2n \pi + \sum _ {j = 1 } ^ { k } \theta _ {j} , $$

where $ \theta _ {j} = \mathop{\rm arg}  \rho _ {j} ( T) $ are the arguments of the multipliers of the first kind [[#References|[4]]], [[#References|[7]]]. For $ k = 1 $ the set of strongly-unstable Hamiltonians breaks up into a countable number of domains; if $ k > 1 $ this set is connected. Various sufficient conditions for $ H( t) \in O _ {n} ^ {( \mu ) } $ are known [[#References|[3]]], [[#References|[7]]], [[#References|[8]]]. Many of these conditions are obtained from the following theorem: if $ H _ {1} ( t) \leq H _ {2} ( t) $ and the whole "segment" $ H _ {s} ( t) = sH _ {1} ( t) + ( 1 - s) H _ {2} ( t) $, $ 0 \leq s \leq 1 $, is strongly stable, then every Hamiltonian $ H ( t) $ for which $ H _ {1} ( t) \leq H( t) \leq H _ {2} ( t) $ is strongly stable. A similar theorem has been proved in the infinite-dimensional case ($ k = \infty $), where $ \{ x \} $ is a Hilbert space and, in (2), $ J $ and $ H ( t) $ are operators with special properties [[#References|[9]]]; for $ k = 1 $ the theorem is also valid for strongly-unstable Hamiltonians [[#References|[3]]].
  
 
==Parametric resonance.==
Consider the equation

$$ \tag{8 } J \frac{dx }{dt }  = H _ {0} x , $$

with a constant Hamiltonian $ H _ {0} $ such that all the solutions of equation (8) are bounded. A frequency $ \theta $ is said to be critical if for any $ \delta > 0 $ there exists a "perturbed" Hamiltonian equation

$$ \tag{9 } J \frac{dx }{dt }  = H ( \theta t) x, $$
  
where $ H ( t + 2 \pi ) = H ( t) $ and $ \| H( t) - H _ {0} \| < \delta $, such that equation (9) has unbounded solutions ($ \theta $ may have any sign). The phenomenon in which unbounded oscillations arise as a result of arbitrarily-small periodic perturbations of some of the system's parameters is called parametric resonance. Parametric resonance is of great importance in technology and in physics. It is more "dangerous" (or more "useful", depending on the problem) than ordinary resonance since, unlike the latter, the oscillations increase exponentially (not polynomially), and the resonance frequencies are not discrete but fill intervals. The lengths of these intervals depend on the amplitude of the perturbation, and the intervals contract to single points (the critical frequencies) as the amplitude of the perturbation tends to zero. Let $ i \omega _ {1} \dots i \omega _ {k} $ be the eigen values of the first kind of the matrix $ J ^ {-1} H _ {0} $ (then $ - i \omega _ {1} \dots - i \omega _ {k} $ are those of the second kind), and let $ \omega _ {j} + \omega _ {h} \neq 0 $ ($ j, h = 1 \dots k $). The critical frequencies are the numbers $ \theta _ {jh} ^ {( N) } = ( \omega _ {j} + \omega _ {h} )/N $ ($ j, h = 1 \dots k $; $ N = \pm 1, \pm 2 ,\dots $) and only these numbers [[#References|[2]]]. In (9), let $ H ( \theta t) = H _ {0} + \epsilon H _ {1} ( \theta t) $, where $ \epsilon $ is a small parameter and

$$ J ^ {- 1 } H _ {0} f _ {j}  = i \omega _ {j} f _ {j} \ \ ( j = \pm 1 \dots \pm k),\ \ \omega _ {-j}  = - \omega _ {j} , $$

$$ H _ {1} ( \tau )  = \sum _ { m } H ^ {( m) } e ^ {im \tau } . $$
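The coefficients $ H ^ {( m) } $ can be recovered from $ H _ {1} $ by averaging over one period. The following numerical sketch (an aside; the concrete trigonometric matrix polynomial is an arbitrary illustrative choice) computes them by quadrature:

```python
import numpy as np

# Fourier coefficients H^{(m)} of a 2*pi-periodic matrix function
# H_1(tau) = sum_m H^{(m)} e^{i m tau}, obtained by averaging
# H_1(tau) e^{-i m tau} over one period on a uniform grid.
def fourier_coeff(H1, m, n=4096):
    taus = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    vals = np.array([H1(t) * np.exp(-1j * m * t) for t in taus])
    return vals.mean(axis=0)          # (1/2pi) * integral over one period

# Illustrative test function: H_1 = H10 + H11 e^{i tau} + H11 e^{-i tau}.
H10 = np.array([[1.0, 0.0], [0.0, 2.0]])   # the m = 0 coefficient
H11 = np.array([[0.0, 0.5], [0.5, 0.0]])   # the m = +1 (and -1) coefficient

def H1(tau):
    return H10 + 2.0 * H11 * np.cos(tau)

C0 = fourier_coeff(H1, 0)   # should recover H10
C1 = fourier_coeff(H1, 1)   # should recover H11
```

For a trigonometric polynomial the uniform-grid average is exact up to rounding, so `C0` and `C1` reproduce the chosen coefficients.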
  
The vector system $ \{ f _ {j} \} $ may be chosen so that $ \langle f _ {j} , f _ {h} \rangle = \delta _ {jh}  \mathop{\rm sign}  j $ ($ j = \pm 1 \dots \pm k $). In the "general case" the points $ ( \epsilon , \theta ) $ for which equation (9) with $ H ( \theta t) = H _ {0} + \epsilon H _ {1} ( \theta t) $ is strongly unstable fill, near the $ \theta $-axis, the domains $ \Omega _ {1} ( \epsilon ) < \theta < \Omega _ {2} ( \epsilon ) $, where $ \Omega _ {1, 2 } = \theta _ {jh} ^ {( N) } + \epsilon \mu _ {1, 2 } + O ( \epsilon ^ {3/2} ) $. The numbers $ \mu _ {1} $, $ \mu _ {2} $ can be expressed simply in terms of $ H ^ {( m) } $ and $ f _ {j} $ (see, for example, [[#References|[3]]]).

The magnitude $ | ( H ^ {( N) } f _ {j} , f _ {- h} ) | $ characterizes the "degree of danger" of the critical frequency $ \theta _ {jh} ^ {( N) } $: the larger it is, the wider the "instability wedge" adjacent to the point $ ( 0, \theta _ {jh} ^ {( N) } ) $ and the nearer to the $ \theta $-axis the domain of $ \alpha $-exponential growth of the solutions for small $ \alpha > 0 $ (for more details see [[#References|[3]]]). See also [[#References|[10]]], [[#References|[12]]], [[#References|[13]]] for further information.
  
Results similar to the above have been obtained for equation (1) with complex coefficients ($ H( t) $ is a Hermitian matrix function, $ J ^ {*} = - J $, $ \mathop{\rm det}  J \neq 0 $; see, for example, [[#References|[11]]]). A more general system

$$ Q ( t) \frac{dy }{dt }  = \left [ S ( t) - \frac{1}{2} \frac{dQ }{dt } \right ] y, $$

where

$$ Q ( t) ^ {*}  = - Q ( t),\ \ S ( t) ^ {*}  = S ( t),\ \ \mathop{\rm det}  Q ( t)  \neq  0, $$

$$ Q ( t + T)  = Q ( t),\ \ S ( t + T)  = S ( t), $$

is considered in [[#References|[14]]]. It was found that the number of domains of stability is finite both in the real and in the complex case; a characterization of these domains was obtained in terms of the properties of the solutions of the corresponding equations.

Revision as of 19:43, 5 June 2020


A system

$$ \tag{1 } \frac{dp _ {j} }{dt } = \ \frac{\partial {\mathcal H} }{\partial q _ {j} } ,\ \ \frac{dq _ {j} }{dt } = \ - \frac{\partial {\mathcal H} }{\partial p _ {j} } ,\ \ j = 1 \dots k, $$

where $ {\mathcal H} $ is a quadratic form in the variables $ p _ {1} \dots p _ {k} $, $ q _ {1} \dots q _ {k} $ with real coefficients which may depend on the time $ t $. A linear Hamiltonian system is also called a linear canonical system. The system (1) may be written as a Hamiltonian vector equation

$$ \tag{2 } J \frac{dx }{dt } = H ( t) x, $$

where $ x $ is the column vector $ ( p _ {1} \dots p _ {k} , q _ {1} \dots q _ {k} ) $, $ H( t) = H( t) ^ {*} $ is the matrix of the quadratic form $ 2 {\mathcal H} $ and

$$ J = \left \| \begin{array}{rc} 0 &I _ {k} \\ - I _ {k} & 0 \\ \end{array} \ \right \| $$

(here $ I _ {k} $ is the $ k \times k $ identity matrix). Equation (2) with an arbitrary non-singular real skew-symmetric matrix $ J $ may be reduced, by a suitable substitution $ x = Sx _ {1} $, where $ S $ is a non-singular real matrix, to a similar form:

$$ J _ {1} \frac{dx _ {1} }{dt } = H _ {1} ( t) x _ {1} , $$

where $ J _ {1} $ is any given real non-singular skew-symmetric matrix. It will be assumed that in (2) $ | H( t) | \in L _ {1} [ t _ {1} , t _ {2} ] $, for all $ - \infty < t _ {1} < t _ {2} < + \infty $. The following equations can be reduced to (2): the second-order vector equation

$$ \tag{3 } { \frac{d}{dt} } \left [ R ( t) { \frac{dy}{dt} } \right ] + P ( t) y = 0, $$

in which $ y $ is a vector of order $ k $, $ R( t) = R( t) ^ {*} $ and $ P( t) = P( t) ^ {*} $ are real $ ( k \times k ) $-matrix functions and $ \mathop{\rm det} R ( t) \neq 0 $; the equation

$$ \tag{3a } \frac{d}{dt } \left [ R ( t) \frac{dy }{dt } \right ] + Q \frac{dy }{dt } + P ( t) y = 0, $$

where $ Q = - Q ^ {*} $ is a constant matrix, $ R( t) = R( t) ^ {*} $, $ P( t) = P( t) ^ {*} $, $ \mathop{\rm det} R ( t) \neq 0 $ (the matrices $ P( t) $, $ Q $, $ R( t) $ are real); the scalar equation

$$ \tag{4 } \sum _ {j = 0 } ^ { k } (- 1) ^ {j} \frac{d ^ {j} }{dt ^ {j} } \left ( \phi _ {j} ( t) \frac{d ^ {j} \eta }{dt ^ {j} } \right ) = 0, $$

where $ \phi _ {j} ( t) $ are real functions, $ \phi _ {k} ( t) \neq 0 $; and the corresponding vector equation. For equation (3),

$$ x = \left \| \begin{array}{c} y \\ z \end{array} \ \right \| ,\ \ z = R { \frac{dy}{dt} } ; $$

for equation (3a),

$$ x = \left \| \begin{array}{c} p \\ q \end{array} \ \right \| ,\ \ \textrm{ where } \ p = R { \frac{dy}{dt} } + { \frac{1}{2} } Qy,\ q = y; $$

for equation (4), $ x _ {j} = \eta ^ {( j- 1) } $, $ j = 1 \dots k $,

$$ x _ {j + k } = \ \phi _ {j} x _ {j + 1 } - x _ {k + j + 1 } ^ \prime ,\ \ j = 1 \dots k - 1, $$

$$ x _ {2k} = \phi _ {k} x _ {k} ^ \prime . $$

The scalar equation (3) with $ R( t) = 1 $, i.e. the equation

$$ \frac{d ^ {2} y }{dt ^ {2} } + P( t) y = 0 , $$

where $ P( t) $ is a periodic function, is known as Hill's equation (cf. also Hill equation).
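The reduction of Hill's equation to the form (2) can be checked directly. In the sketch below, $ x = ( y, z) $ with $ z = dy/dt $, and $ H( t) = \mathop{\rm diag} (- P( t), - 1) $ is one consistent choice of the symmetric coefficient matrix for these sign conventions (an illustration; other equivalent conventions exist). The identity $ J \, dx/dt = H( t) x $ is then verified on random data:

```python
import numpy as np

# Hill's equation y'' + P(t) y = 0 as J dx/dt = H(t) x with x = (y, z),
# z = dy/dt.  The check is purely algebraic: dx/dt = (z, -P y), and
# J dx/dt must equal H x for every value of (y, z) and P.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def H_of(P):
    # one consistent (symmetric) choice of H for this sign convention
    return np.diag([-P, -1.0])

rng = np.random.default_rng(0)
for _ in range(100):
    P, y, z = rng.normal(size=3)
    x = np.array([y, z])
    dxdt = np.array([z, -P * y])      # from y' = z, z' = -P y
    assert np.allclose(J @ dxdt, H_of(P) @ x)
```

The same bookkeeping, with $ z = R \, dy/dt $, carries out the reduction of the general vector equation (3).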

Let $ X( t) $ be the evolution matrix of equation (2) (i.e. the matrix of a fundamental system of solutions of equation (2), normalized by the condition $ X( 0) = I _ {2k} $). Introduce the indefinite scalar product $ \langle x, y \rangle = i( Jx, y) $, where $ ( x, y) = \sum _ {j= 1} ^ {2k} x _ {j} \overline{y} _ {j} $ is the ordinary inner product. A complex matrix $ U $ which is unitary in the sense of this product (cf. also Unitary matrix), i.e. such that $ U ^ {*} JU = J $, is called $ J $-unitary; a real $ J $-unitary matrix $ X $ is called symplectic.
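For $ k = 1 $ the symplectic condition $ X ^ {*} JX = J $ for a real matrix reduces to $ \mathop{\rm det}  X = 1 $, so a plane rotation is the simplest symplectic matrix. A small numerical check (illustrative only):

```python
import numpy as np

# For k = 1 any real 2x2 matrix satisfies X^T J X = (det X) J, so
# symplectic <=> det X = 1.  A rotation is the simplest example.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
t = 0.7
X = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
assert np.allclose(X.T @ J @ X, J)     # X is symplectic

Y = 2.0 * X                            # det Y = 4, no longer symplectic
assert not np.allclose(Y.T @ J @ Y, J)
```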

It is known (cf. Hamiltonian system) that the Poincaré invariant — the exterior differential form $ \sum _ {j=1} ^ {k} dp _ {j} \wedge dq _ {j} $ — is preserved during a motion along the trajectory of a Hamiltonian system. In the case of a linear Hamiltonian system this means that for any solutions $ x ^ {(1)} = x ^ {(1)} ( t) $, $ x ^ {(2)} = x ^ {(2)} ( t) $ of equation (2) one has $ \langle x ^ {(1)} , x ^ {(2)} \rangle = \langle X( t) x ^ {(1)} ( 0), X ( t) x ^ {(2)} ( 0) \rangle = \textrm{ const } $, i.e. $ X( t) $ is a symplectic matrix for any $ t $. It follows from the relation $ X ^ {*} JX = J $ that the eigen values of $ X $ (counted with multiplicities and the orders of the Jordan cells) are symmetric (in the sense of inversion) with respect to the unit circle (the Lyapunov–Poincaré theorem). The eigen values of symplectic (and $ J $-unitary) matrices which are equal in modulus to 1 are subdivided into eigen values of the first and second kind in accordance with the following rule. Let $ \rho $ be an eigen value of a $ J $-unitary matrix $ U $ and let $ | \rho | = 1 $. Then the form $ \langle x, x\rangle $ on the corresponding root subspace is non-degenerate. Let $ p $ be the number of its positive and $ q $ the number of its negative squares; one says that $ p $ eigen values of the first kind and $ q $ eigen values of the second kind coincide at $ \rho $.

The kind of the purely-imaginary eigen values of the matrices $ K = J ^ {-1} L $, $ L ^ {*} = L $ (for which $ \langle Kx, y \rangle = - \langle x, Ky \rangle $ for all $ x, y $) is defined in the same way. For a $ J $-unitary matrix $ X $ the eigen values $ \rho $ for which $ | \rho | \neq 1 $ are called eigen values of the first kind if $ | \rho | < 1 $, and eigen values of the second kind if $ | \rho | > 1 $. Any symplectic matrix has (counted with multiplicities) exactly $ k $ eigen values $ \rho _ {1} \dots \rho _ {k} $ of the first kind and $ k $ eigen values $ \rho _ {1} ^ {-1} \dots \rho _ {k} ^ {-1} $ of the second kind. If $ \rho _ {1} \dots \rho _ {k} $ are suitably numbered, they are continuous functions of the matrix $ X $ [2], [3].
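The Lyapunov–Poincaré symmetry is easy to observe numerically. For any invertible $ A $, the block-diagonal matrix $ X = \mathop{\rm diag} ( A, ( A ^ {T} ) ^ {-1} ) $ is symplectic, which makes the invariance of the spectrum under $ \rho \rightarrow 1/ \rho $ visible (a sketch with an arbitrary random $ A $, not a construction from the article):

```python
import numpy as np

# Lyapunov-Poincare symmetry: the spectrum of a symplectic matrix is
# invariant under rho -> 1/rho.  For any invertible A, the block matrix
# X = [[A, 0], [0, inv(A).T]] satisfies X^T J X = J.
k = 2
Z = np.zeros((k, k))
J = np.block([[Z, np.eye(k)], [-np.eye(k), Z]])

rng = np.random.default_rng(1)
A = rng.normal(size=(k, k)) + 3.0 * np.eye(k)   # safely invertible
X = np.block([[A, Z], [Z, np.linalg.inv(A).T]])
assert np.allclose(X.T @ J @ X, J)              # X is symplectic

rho = np.linalg.eigvals(X)
# every eigenvalue's reciprocal is again an eigenvalue
for r in rho:
    assert np.min(np.abs(rho - 1.0 / r)) < 1e-8
```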

Oscillatory properties of solutions of linear Hamiltonian systems.

The oscillatory properties of the solutions of equations (2)–(4) are involved in a number of problems in variational calculus, optimum control, studies on the properties of the spectrum of the corresponding differential operator, etc.

Definitions. I) Equation (3) is called oscillatory if for any $ t _ {0} > 0 $ it is possible to find numbers $ t _ {2} > t _ {1} > t _ {0} $ and a solution $ y( t) \not\equiv 0 $ such that $ y ( t _ {1} ) = y ( t _ {2} ) = 0 $, and is called non-oscillatory otherwise. II) Equation (4) is called oscillatory if for any $ t _ {0} > 0 $ it is possible to find a solution $ \eta ( t) \not\equiv 0 $ which has at least two zeros $ t _ {1} , t _ {2} $, $ t _ {2} > t _ {1} > t _ {0} $, of order $ k $, and is called non-oscillatory otherwise. III) Equation (1) is called oscillatory if the function

$$ \tag{5 } \Delta \mathop{\rm Arg} X ( t) = \ \sum _ {j = 1 } ^ { k } \Delta \mathop{\rm Arg} \rho _ {j} ( t) $$

is unbounded on $ ( t _ {0} , \infty ) $, and is called non-oscillatory otherwise. (In (5), the $ \rho _ {j} ( t) $ are the eigen values of $ X( t) $ of the first kind.) After equation (3) or (4) has been reduced to (2), the equation (2) thus obtained is oscillatory in the sense of III) if and only if equation (3) (or (4)) is oscillatory in the sense of definition I) (or II)). The following geometrical interpretation may be given to definition III). The group $ \mathop{\rm Sp} ( k, R) $ of symplectic matrices $ X $ is homeomorphic to the product of a connected and simply-connected topological space by the circle. The corresponding mapping may be chosen so that $ \mathop{\rm exp} ( i \sum _ {j=1} ^ {k} \mathop{\rm Arg} \rho _ {j} ) $ is the projection of the matrix $ X \in \mathop{\rm Sp} ( k, R) $ onto the circle (here the $ \rho _ {j} $ are the eigen values of the first kind of $ X $). Thus, equation (2) is oscillatory if, for $ t \rightarrow \infty $, $ X( t) $ "winds unboundedly" in $ \mathop{\rm Sp} ( k, R) $. (If $ k = 1 $, this group is homeomorphic to a "solid torus", and the "winding" has a visual interpretation.) There exist various other definitions of the argument of a symplectic matrix, which correspond to other mappings of the group $ \mathop{\rm Sp} ( k, R) $ to the circle, and which are equivalent to (5) in the sense that they all satisfy the inequality

$$ \tag{6 } | \Delta \mathop{\rm Arg} ^ \prime X ( t) - \Delta \mathop{\rm Arg} \ X ( t) | < c $$

for any curve $ X( t) \in \mathop{\rm Sp} ( k, R) $. Such arguments are, for example,

$$ \mathop{\rm Arg} _ {1} X = \ \mathop{\rm Arg} \mathop{\rm det} ( U _ {1} - iV _ {1} ); \ \ \mathop{\rm Arg} _ {2} X = \ \mathop{\rm Arg} \mathop{\rm det} ( U _ {2} - iV _ {2} ), $$

where $ U _ {j} , V _ {j} $ are the $ ( k \times k ) $-submatrices of the matrix

$$ X = \left \| \begin{array}{cc} U _ {1} &U _ {2} \\ V _ {1} &V _ {2} \\ \end{array} \ \right \| $$

(cf. [6]). There exist various effectively verifiable sufficient (and sometimes necessary and sufficient) conditions for equations (2), (3) and (4) to be oscillatory or non-oscillatory (see, for example, [5] and the references in [6]).
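For $ k = 1 $ the winding in definition III) can be observed numerically. The following sketch (Python with NumPy; it is an illustration, not part of the original article) integrates the fundamental matrix of $ y ^ {\prime\prime} + ay = 0 $ and tracks a continuous branch of $ \mathop{\rm Arg} _ {1} X = \mathop{\rm Arg} ( u _ {1} - iv _ {1} ) $: for $ a = 1 $ (oscillatory) the argument grows without bound, while for $ a = - 1 $ (non-oscillatory) it stays bounded.

```python
import numpy as np

def arg_increment(a, t_max=20.0, n=4000):
    """Integrate the fundamental matrix X(t) of y'' + a*y = 0
    (X' = A X, X(0) = I) with classical RK4, and return the total
    increment of a continuous branch of Arg(u1 - i*v1), the k = 1
    case of Arg_1 X = Arg det(U1 - i V1)."""
    A = np.array([[0.0, 1.0], [-a, 0.0]])
    X = np.eye(2)
    dt = t_max / n
    samples = []
    for _ in range(n):
        # one classical RK4 step for the matrix ODE X' = A X
        k1 = A @ X
        k2 = A @ (X + 0.5 * dt * k1)
        k3 = A @ (X + 0.5 * dt * k2)
        k4 = A @ (X + dt * k3)
        X = X + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        samples.append(X[0, 0] - 1j * X[1, 0])   # u1 - i*v1
    phase = np.unwrap(np.angle(np.array(samples)))
    return phase[-1] - phase[0]

osc = arg_increment(1.0)    # y'' + y = 0: oscillatory, argument keeps growing
non = arg_increment(-1.0)   # y'' - y = 0: non-oscillatory, argument bounded
print(osc, non)
```

For $ a = 1 $ the fundamental matrix is a rotation, $ u _ {1} - iv _ {1} = e ^ {it} $, so the increment over $ [ 0, 20] $ is close to $ 20 $ radians; for $ a = - 1 $ it stays below $ \pi /4 $ in absolute value.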

Linear Hamiltonian systems with periodic coefficients.

Let, in (2), $ H( t + T) = H( t) $ almost everywhere. The matrix $ X( T) $ is called the monodromy matrix of equation (2), and its eigenvalues are called the multipliers of (2). Equation (2) (or the corresponding Hamiltonian $ H( t) $) is called strongly stable if all its solutions are bounded on $ ( - \infty , + \infty ) $ and this property is preserved under small deformations of the Hamiltonian in the sense of the norm $ \| H \| = \int _ {0} ^ {T} | H( t) | dt $. Strong instability of equation (2) (of the Hamiltonian $ H( t) $) is defined analogously. For equation (2) to be strongly stable it is necessary and sufficient that all its multipliers lie on the unit circle and that no two multipliers of different kinds coincide (in other words, that all root subspaces of $ X ( T) $ be definite with respect to the product $ \langle x, y\rangle = i( Jx, y) $). Equation (2) is strongly unstable if and only if some of its multipliers lie outside the unit circle. Two samples of multipliers (taken together with their kinds) that contain no coincident multipliers of different kinds are called equivalent if one sample can be continuously deformed into the other in such a way that multipliers of different kinds never meet. A class of equivalent samples is called a multiplier type. In the case of stability there are $ 2 ^ {k} $ multiplier types. They may be denoted by symbols of the form $ \mu = (+ , + , - , + \dots - ) $, in which the plus and minus signs correspond to the kinds of the multipliers successively encountered when moving along the upper half-circle $ | \rho | = 1 $ from the point $ \rho = + 1 $ to the point $ \rho = - 1 $. Let $ L = \{ H( t) \} $ be the Banach space of all Hamiltonians of the above type with the norm $ \| H \| = \int _ {0} ^ {T} | H( t) | dt $. 
The set $ O \subset L $ of strongly-stable Hamiltonians breaks up in $ L $ into a countable number of domains $ O _ {n} ^ {( \mu ) } $, $ n = 0, \pm 1, \pm 2 ,\dots $; $ \mu = \mu _ {1} \dots \mu _ {2 ^ {k} } $. The domain $ O _ {n} ^ {( \mu ) } $ is the set of all Hamiltonians to which there correspond the multiplier type $ \mu $ and the integer $ n $ defined by the formula

$$ \left . \Delta \mathop{\rm Arg} X ( t) \right | _ {0} ^ {T} = \ 2n \pi + \sum _ {j = 1 } ^ { k } \theta _ {j} , $$

where $ \theta _ {j} = \mathop{\rm arg} \rho _ {j} ( T) $ are the arguments of the multipliers of the first kind [4], [7]. For $ k = 1 $ the set of strongly-unstable Hamiltonians breaks up into a countable number of domains; for $ k > 1 $ this set is connected. Various sufficient conditions for $ H( t) \in O _ {n} ^ {( \mu ) } $ are known [3], [7], [8]. Many of these conditions are obtained from the following theorem: let $ H _ {1} ( t) \leq H _ {2} ( t) $, and suppose that every Hamiltonian of the "segment" $ H _ {s} ( t) = sH _ {1} ( t) + ( 1 - s) H _ {2} ( t) $, $ 0 \leq s \leq 1 $, is strongly stable; then every Hamiltonian $ H ( t) $ satisfying $ H _ {1} ( t) \leq H( t) \leq H _ {2} ( t) $ is strongly stable. A similar theorem has also been proved for the infinite-dimensional case $ ( k = \infty ) $, in which the space of vectors $ x $ is a Hilbert space and, in (2), $ J $ and $ H ( t) $ are operators with special properties [9]; for $ k = 1 $ the theorem is valid for strongly-unstable Hamiltonians as well [3].
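For $ k = 1 $ the stability criterion above is easy to check numerically: $ X( T) $ is symplectic, so $ \mathop{\rm det} X( T) = 1 $, and both multipliers lie on the unit circle and are distinct exactly when $ | \mathop{\rm tr} X( T) | < 2 $. A minimal sketch (Python with NumPy; the Hill-equation Hamiltonian $ H( t) = \mathop{\rm diag} ( a( t), 1) $ and the numerical values are illustrative assumptions, not from the article):

```python
import numpy as np

def monodromy(a, T, n=8000):
    """Monodromy matrix X(T) of z' = A(t) z, A(t) = [[0, 1], [-a(t), 0]],
    i.e. Hill's equation y'' + a(t)*y = 0.  This is (2) with
    J = [[0, -1], [1, 0]] and the illustrative H(t) = diag(a(t), 1)."""
    def A(t):
        return np.array([[0.0, 1.0], [-a(t), 0.0]])
    X = np.eye(2)
    dt = T / n
    for i in range(n):
        t = i * dt
        # classical RK4 step for the matrix ODE X' = A(t) X
        k1 = A(t) @ X
        k2 = A(t + dt / 2) @ (X + dt / 2 * k1)
        k3 = A(t + dt / 2) @ (X + dt / 2 * k2)
        k4 = A(t + dt) @ (X + dt * k3)
        X = X + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return X

# a(t) = omega**2 * (1 + eps*cos t) with omega = 0.7, eps = 0.3:
# omega is away from the resonant values N/2, so strong stability
# (|tr X(T)| < 2) is expected; det X(T) = 1 since X(T) is symplectic.
X = monodromy(lambda t: 0.7**2 * (1 + 0.3 * np.cos(t)), 2 * np.pi)
print(np.linalg.det(X), np.trace(X))
```

The determinant check is a useful sanity test of the integrator: for a Hamiltonian system the exact monodromy matrix is symplectic, and a good scheme preserves this to high accuracy.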

Parametric resonance.

Consider the equation

$$ \tag{8 } J { \frac{dx}{dt} } = H _ {0} x , $$

with a constant Hamiltonian $ H _ {0} $ such that all the solutions of equation (8) are bounded. A frequency $ \theta $ is said to be critical if for any $ \delta > 0 $ there exists a "perturbed" Hamiltonian equation

$$ \tag{9 } J { \frac{dx}{dt} } = \ H ( \theta t) x, $$

where $ H ( t + 2 \pi ) = H ( t) $ and $ \| H( t) - H _ {0} \| < \delta $, such that equation (9) has unbounded solutions (here $ \theta $ may have any sign). The phenomenon in which unbounded oscillations arise as a result of arbitrarily small periodic perturbations of some of the system's parameters is called parametric resonance. Parametric resonance is of great importance in technology and in physics. It is more "dangerous" (or more "useful", depending on the problem) than ordinary resonance since, unlike the latter, the oscillations grow exponentially (and not polynomially), and the resonance frequencies are not isolated but fill intervals. The lengths of these intervals depend on the amplitude of the perturbation, and the intervals themselves contract to single points (the critical frequencies) as the amplitude of the perturbation tends to zero. Let $ i \omega _ {1} \dots i \omega _ {k} $ be the eigenvalues of the first kind of the matrix $ J ^ {- 1 } H _ {0} $ (then $ - i \omega _ {1} \dots - i \omega _ {k} $ are those of the second kind), and let $ \omega _ {j} + \omega _ {h} \neq 0 $ ($ j, h = 1 \dots k $). The critical frequencies are precisely the numbers $ \theta _ {jh} ^ {( N) } = ( \omega _ {j} + \omega _ {h} )/N $ ($ j, h = 1 \dots k $; $ N = \pm 1, \pm 2 ,\dots $) [2]. In (9), let $ H ( \theta t) = H _ {0} + \epsilon H _ {1} ( \theta t) $, where $ \epsilon $ is a small parameter and

$$ J ^ {- 1 } H _ {0} f _ {j} = \ i \omega _ {j} f _ {j} \ \ ( j = \pm 1 \dots \pm k),\ \ \omega _ {- j } = - \omega _ {j} , $$

$$ H _ {1} ( \tau ) = \sum _ { m } H ^ {( m) } e ^ {im \tau } . $$

The vector system $ \{ f _ {j} \} $ may be chosen so that $ \langle f _ {j} , f _ {h} \rangle = \delta _ {jh} \mathop{\rm sign} j $ ($ j, h = \pm 1 \dots \pm k $). In the "general case" the points $ ( \epsilon , \theta ) $ for which equation (9) with $ H ( \theta t) = H _ {0} + \epsilon H _ {1} ( \theta t) $ is strongly unstable fill, near the $ \theta $-axis, the domains $ \Omega _ {1} ( \epsilon ) < \theta < \Omega _ {2} ( \epsilon ) $, where $ \Omega _ {1, 2 } = \theta _ {jh} ^ {( N) } + \epsilon \mu _ {1, 2 } + O ( \epsilon ^ {3/2} ) $. The numbers $ \mu _ {1} $, $ \mu _ {2} $ can be expressed simply in terms of $ H ^ {( m) } $ and $ f _ {j} $ (see, for example, [3]).

The quantity $ | ( H ^ {( N) } f _ {j} , f _ {- h } ) | $ characterizes the "degree of danger" of the critical frequency $ \theta _ {jh} ^ {( N) } $: the larger it is, the wider the "instability wedge" adjacent to the point $ ( 0, \theta _ {jh} ^ {( N) } ) $ and the nearer to the $ \theta $-axis the domain of $ \alpha $-exponential growth of the solutions for a given small $ \alpha > 0 $ (for more details see [3]). See also [10], [12], [13] for further information.

Results similar to the above have been obtained for equations (1) with complex coefficients ($ H( t) $ is a Hermitian matrix function, $ J ^ {*} = - J $, $ \mathop{\rm det} J \neq 0 $; see, for example, [11]). A more general system

$$ Q ( t) { \frac{dy}{dt} } = \ \left [ S ( t) - { \frac{1}{2} } { \frac{dQ}{dt} } \right ] y, $$

where

$$ Q ( t) ^ {*} = - Q ( t),\ \ S ( t) ^ {*} = S ( t),\ \ \mathop{\rm det} Q ( t) \neq 0, $$

$$ Q ( t + T) = Q ( t),\ S ( t + T) = S ( t), $$

is considered in [14]. It was found that the number of domains of stability is finite both in the real and in the complex case; a characterization of these domains was obtained in terms of the properties of the solutions of the corresponding equations.

A number of similar results were also obtained for operator equations (2) with bounded and unbounded operator coefficients in a Hilbert space [15], [16].

References

[1] A.M. Lyapunov, "Problème général de la stabilité du mouvement" , Princeton Univ. Press (1947) (Translated from Russian)
[2] M.G. Krein, "Foundations of the theory of $ \lambda $-zones of stability of a canonical system of linear differential equations with periodic coefficients" Transl. Amer. Math. Soc. (2) , 120 (1983) pp. 1–70 In memoriam: A.A. Andronov (1955) pp. 413–498
[3] V.A. Yakubovich, V.M. Starzhinskii, "Linear differential equations with periodic coefficients" , Wiley (1975) (Translated from Russian)
[4] I.M. Gel'fand, V.B. Lidskii, "On the structure of stability regions of linear canonical systems of differential equations with periodic coefficients" Uspekhi Mat. Nauk , 10 : 1 (1955) pp. 3–40 (In Russian)
[5] R.L. Sternberg, "Variational methods and non-oscillation theorems for systems of differential equations" Duke Math. J. , 19 (1952) pp. 311–322
[6] V.A. Yakubovich, "Oscillatory properties of solutions of canonical equations" Mat. Sb. , 56 (98) : 1 (1962) pp. 3–42 (In Russian)
[7] M.G. Krein, "Hamiltonian systems of linear differential equations with periodic coefficients" Transl. Amer. Math. Soc. (2) , 120 (1983) pp. 139–168 Proc. Intern. Symp. Non-Linear Oscillations , 1 (1963) pp. 277–305
[8] V.B. Lidskii, "Oscillation theorems for canonical systems of differential equations" Dokl. Akad. Nauk SSSR , 102 : 5 (1955) pp. 877–880 (In Russian)
[9] V.M. Derguzov, "On stability of the solutions of the Hamilton equation with unbounded periodic operator coefficients" Mat. Sb. , 63 (105) : 4 (1964) pp. 591–619 (In Russian)
[10] J. Moser, "New aspects in the theory of stability of Hamiltonian systems" Comm. Pure Appl. Math. , 11 (1958) pp. 81–114
[11] W.A. Coppel, A. Howe, "On the stability of linear canonical systems with periodic coefficients" J. Austral. Math. Soc. , 5 (1965) pp. 169–195
[12] Yu.A. Mitropol'skii, "An averaging method in non-linear mechanics" , Kiev (1971) (In Russian)
[13] N.P. Erugin, "Linear systems of ordinary differential equations with periodic and quasi-periodic coefficients" , Acad. Press (1966) (Translated from Russian)
[14] V.B. Lidskii, P.A. Frolov, "The structure of the domain of stability of a self-adjoint system of differential equations with periodic coefficients" Mat. Sb. , 71 (113) : 1 (1966) pp. 48–64 (In Russian)
[15] Yu.L. Daletskii, M.G. Krein, "Stability of solutions of differential equations in Banach space" , Amer. Math. Soc. (1974) (Translated from Russian)
[16] V.N. Fomin, "Mathematical theory of parameter resonance in linear distributed systems" , Leningrad (1972) (In Russian)
How to Cite This Entry:
Hamiltonian system, linear. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Hamiltonian_system,_linear&oldid=17683
This article was adapted from an original article by V.A. Yakubovich (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article