# Hamiltonian system, linear

A system

$$\tag{1 } \frac{dp _ {j} }{dt } = \ - \frac{\partial {\mathcal H} }{\partial q _ {j} } ,\ \ \frac{dq _ {j} }{dt } = \ \frac{\partial {\mathcal H} }{\partial p _ {j} } ,\ \ j = 1 \dots k,$$

where ${\mathcal H}$ is a quadratic form in the variables $p _ {1} \dots p _ {k}$, $q _ {1} \dots q _ {k}$ with real coefficients which may depend on the time $t$. A linear Hamiltonian system is also called a linear canonical system. The system (1) may be written as a Hamiltonian vector equation

$$\tag{2 } J \frac{dx }{dt } = H ( t) x,$$

where $x$ is the column vector $( p _ {1} \dots p _ {k} , q _ {1} \dots q _ {k} )$, $H( t) = H( t) ^ {*}$ is the matrix of the quadratic form $2 {\mathcal H}$ and

$$J = \left \| \begin{array}{rc} 0 &I _ {k} \\ - I _ {k} & 0 \\ \end{array} \ \right \|$$

(here $I _ {k}$ is the $k \times k$ identity matrix). Equation (2) with an arbitrary non-singular real skew-symmetric matrix $J$ may be reduced, by a suitable substitution $x = Sx _ {1}$, where $S$ is a non-singular real matrix, to a similar form:

$$J _ {1} \frac{dx _ {1} }{dt } = H _ {1} ( t) x _ {1} ,$$

where $J _ {1}$ is any given real non-singular skew-symmetric matrix. It will be assumed that in (2) $| H( t) | \in L _ {1} [ t _ {1} , t _ {2} ]$, for all $- \infty < t _ {1} < t _ {2} < + \infty$. The following equations can be reduced to (2): the second-order vector equation

$$\tag{3 } { \frac{d}{dt} } \left [ R ( t) { \frac{dy}{dt} } \right ] + P ( t) y = 0,$$

in which $y$ is a $k$-dimensional vector, $R( t) = R( t) ^ {*}$ and $P( t) = P( t) ^ {*}$ are real $( k \times k )$-matrix functions and $\mathop{\rm det} R ( t) \neq 0$; the equation

$$\tag{3a } \frac{d}{dt } \left [ R ( t) \frac{dy }{dt } \right ] + Q \frac{dy }{dt } + P ( t) y = 0,$$

where $Q = - Q ^ {*}$ is a constant matrix, $R( t) = R( t) ^ {*}$, $P( t) = P( t) ^ {*}$, $\mathop{\rm det} R ( t) \neq 0$ (the matrices $P( t)$, $Q$, $R( t)$ are real); the scalar equation

$$\tag{4 } \sum _ {j = 0 } ^ { k } (- 1) ^ {j} \frac{d ^ {j} }{dt ^ {j} } \left ( \phi _ {j} ( t) \frac{d ^ {j} \eta }{dt ^ {j} } \right ) = 0,$$

where $\phi _ {j} ( t)$ are real functions, $\phi _ {k} ( t) \neq 0$; and the corresponding vector equation. For equation (3),

$$x = \left \| \begin{array}{c} y \\ z \end{array} \ \right \| ,\ \ z = R { \frac{dy}{dt} } ;$$

for equation (3a),

$$x = \left \| \begin{array}{c} p \\ q \end{array} \ \right \| ,\ \ \textrm{ where } \ p = R { \frac{dy}{dt} } + { \frac{1}{2} } Qy,\ q = y;$$

for equation (4), $x _ {j} = \eta ^ {( j- 1) }$, $j = 1 \dots k$,

$$x _ {j + k } = \ \phi _ {j} x _ {j + 1 } - x _ {k + j + 1 } ^ \prime ,\ \ j = 1 \dots k - 1,$$

$$x _ {2k} = \phi _ {k} x _ {k} ^ \prime .$$
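As a sketch of the first of these reductions, take the scalar case of (3) with $R = 1$, $P = \omega ^ {2}$ constant. The block form $H = \mathop{\rm diag} ( - P, - R ^ {-1} )$ used below is our own derivation from the substitution $z = R y ^ \prime$ (it is not written out above), and the numerical values are purely illustrative:

```python
import numpy as np
from scipy.linalg import expm

# Scalar case of equation (3): R = 1, P = omega^2, i.e. y'' + omega^2 y = 0.
omega = 2.0
R, P = 1.0, omega**2

# Substitution x = (y, z), z = R y' turns (3) into J dx/dt = H x.
# The block form H = [[-P, 0], [0, -1/R]] is our derivation, not quoted
# from the article: J dx/dt = (z', -y') must equal H x = (-P y, -z/R).
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
H = np.array([[-P, 0.0], [0.0, -1.0 / R]])

# dx/dt = J^{-1} H x; for this J, J^{-1} = -J.
A = -J @ H

# Fundamental matrix at time t, applied to x(0) = (1, 0), i.e. y(0)=1, y'(0)=0.
t = 0.7
x_t = expm(t * A) @ np.array([1.0, 0.0])

# Exact solution of y'' + omega^2 y = 0 with these initial data.
y_exact = np.cos(omega * t)
z_exact = R * (-omega * np.sin(omega * t))
err = max(abs(x_t[0] - y_exact), abs(x_t[1] - z_exact))
```

The first-order Hamiltonian system reproduces the second-order dynamics exactly, which is the content of the reduction.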

The scalar equation (3) with $R( t) = 1$, i.e. the equation

$$\frac{d ^ {2} y }{dt ^ {2} } + P( t) y = 0 ,$$

where $P( t)$ is a periodic function, is known as Hill's equation (cf. also Hill equation).
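A small numerical sketch (the coefficient $a + b \cos t$ is an illustrative choice of periodic $P( t)$): integrating Hill's equation over one period yields the monodromy matrix, whose determinant must equal 1 because the Wronskian of two solutions of $y ^ {\prime\prime} + P( t) y = 0$ is constant.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hill's equation y'' + P(t) y = 0 with the illustrative periodic
# coefficient P(t) = a + b*cos(t), period T = 2*pi.
a, b = 1.3, 0.4
T = 2.0 * np.pi

def rhs(t, x):
    y, yp = x
    return [yp, -(a + b * np.cos(t)) * y]

# Monodromy matrix X(T): its columns are the solutions with x(0) = e1, e2.
cols = []
for x0 in ([1.0, 0.0], [0.0, 1.0]):
    sol = solve_ivp(rhs, (0.0, T), x0, rtol=1e-10, atol=1e-12)
    cols.append(sol.y[:, -1])
X_T = np.array(cols).T

# det X(T) = 1: the Wronskian of two solutions is constant in t.
det_err = abs(np.linalg.det(X_T) - 1.0)
```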

Let $X( t)$ be the evolution matrix of equation (2) (i.e. the matrix of a fundamental system of solutions of equation (2), normalized by the condition $X( 0) = I _ {2k}$). Introduce the indefinite scalar product $\langle x, y \rangle = i( Jx, y)$, where $( x, y) = \sum _ {j = 1} ^ {2k} x _ {j} \overline{y} _ {j}$ is the ordinary inner product. A complex matrix $U$ which is unitary in the sense of this product (cf. also Unitary matrix), i.e. is such that $U ^ {*} JU = J$, is called $J$-unitary; a real $J$-unitary matrix $X$ is called symplectic.

It is known (cf. Hamiltonian system) that the Poincaré invariant, i.e. the exterior differential form $\sum _ {j = 1} ^ {k} dp _ {j} \wedge dq _ {j}$, is preserved during a motion along the trajectory of a Hamiltonian system. In the case of a linear Hamiltonian system this means that for any solutions $x ^ {( 1)} = x ^ {( 1)} ( t)$, $x ^ {( 2)} = x ^ {( 2)} ( t)$ of equation (2) one has $\langle x ^ {( 1)} , x ^ {( 2)} \rangle = \langle X( t) x ^ {( 1)} ( 0), X ( t) x ^ {( 2)} ( 0) \rangle = \textrm{ const }$, i.e. $X( t)$ is a symplectic matrix for any $t$. It follows from the relation $X ^ {*} JX = J$ that the eigen values of $X$ (counted with multiplicities and the orders of the Jordan cells) are symmetric (in the sense of inversion) with respect to the unit circle (the Lyapunov–Poincaré theorem). The eigen values of symplectic (and $J$-unitary) matrices which are equal in modulus to 1 are subdivided into eigen values of the first and second kind in accordance with the following rule. Let $\rho$ be an eigen value of a $J$-unitary matrix $U$ and let $| \rho | = 1$. Then the form $\langle x, x\rangle$ on the corresponding root subspace is non-degenerate. Let $p$ be the number of its positive and $q$ the number of its negative blocks; one says that $p$ eigen values of the first kind and $q$ eigen values of the second kind coincide at $\rho$.
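A numerical check of this fact for a constant Hamiltonian (the particular symmetric $H$ below is an arbitrary example): $X( t) = \mathop{\rm exp} ( t J ^ {-1} H )$ satisfies $X ^ {*} J X = J$ for every $t$.

```python
import numpy as np
from scipy.linalg import expm

k = 2
I_k = np.eye(k)
J = np.block([[np.zeros((k, k)), I_k], [-I_k, np.zeros((k, k))]])

# An arbitrary constant symmetric Hamiltonian matrix H = H^T (illustrative).
rng = np.random.default_rng(0)
M = rng.standard_normal((2 * k, 2 * k))
H = (M + M.T) / 2.0

# Evolution matrix of J dx/dt = H x:  X(t) = expm(t * J^{-1} H).
t = 1.3
X = expm(t * np.linalg.solve(J, H))

# X is symplectic: X^T J X = J, i.e. the Poincare invariant is preserved.
sympl_err = np.max(np.abs(X.T @ J @ X - J))
```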

The kind of the purely-imaginary eigen values of the matrices $K = J ^ {-1} L$, $L ^ {*} = L$ (for which $\langle Kx, y \rangle = - \langle x, Ky \rangle$ for all $x, y$) is defined in the same way. For a $J$-unitary matrix $X$ the eigen values $\rho$ for which $| \rho | \neq 1$ are called eigen values of the first kind if $| \rho | < 1$, and eigen values of the second kind if $| \rho | > 1$. Any symplectic matrix has (counted with multiplicities) exactly $k$ eigen values $\rho _ {1} \dots \rho _ {k}$ of the first kind and $k$ eigen values $\rho _ {1} ^ {-1} \dots \rho _ {k} ^ {-1}$ of the second kind. If $\rho _ {1} \dots \rho _ {k}$ are suitably numbered, they are continuous functions of the matrix $X$.
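The inversion symmetry of the spectrum can also be checked numerically. The diagonal $H$ below is chosen (as an illustration) so that $X = \mathop{\rm exp} ( J ^ {-1} H )$ has no eigen values on the unit circle; then with every $\rho$, $\rho ^ {-1}$ is also an eigen value, and exactly $k$ eigen values lie inside and $k$ outside the circle.

```python
import numpy as np
from scipy.linalg import expm

k = 2
I_k = np.eye(k)
J = np.block([[np.zeros((k, k)), I_k], [-I_k, np.zeros((k, k))]])

# A symmetric H chosen (illustratively) so that X = expm(J^{-1} H)
# has no eigenvalues on the unit circle.
H = np.diag([1.0, 1.0, -1.0, -1.0])
X = expm(np.linalg.solve(J, H))

rho = np.linalg.eigvals(X)

# Lyapunov-Poincare symmetry: with every rho, 1/rho is also an eigenvalue.
pair_err = max(min(abs(1.0 / r - s) for s in rho) for r in rho)

# Exactly k eigenvalues of the first kind (|rho| < 1) and k of the
# second kind (|rho| > 1).
n_inside = int(np.sum(np.abs(rho) < 1.0))
n_outside = int(np.sum(np.abs(rho) > 1.0))
```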

## Oscillatory properties of solutions of linear Hamiltonian systems.

The oscillatory properties of the solutions of equations (2)–(4) are involved in a number of problems in variational calculus, optimum control, studies on the properties of the spectrum of the corresponding differential operator, etc.

Definitions. I) Equation (3) is called oscillatory if for any $t _ {0} > 0$ it is possible to find numbers $t _ {2} > t _ {1} > t _ {0}$ and a solution $y( t) \not\equiv 0$ such that $y ( t _ {1} ) = y ( t _ {2} ) = 0$, and non-oscillatory otherwise. II) Equation (4) is called oscillatory if for any $t _ {0} > 0$ it is possible to find a solution $\eta ( t) \not\equiv 0$ which has at least two zeros $t _ {1} , t _ {2}$, $t _ {2} > t _ {1} > t _ {0}$, of order $k$, and non-oscillatory otherwise. III) Equation (2) is called oscillatory if the function

$$\tag{5 } \Delta \mathop{\rm Arg} X ( t) = \ \sum _ {j = 1 } ^ { k } \Delta \mathop{\rm Arg} \rho _ {j} ( t)$$

is unbounded on $( t _ {0} , \infty )$, and non-oscillatory otherwise. (In (5), the $\rho _ {j} ( t)$ are the eigen values of $X( t)$ of the first kind.) After equation (3) or (4) has been reduced to (2), the equation (2) thus obtained will be oscillatory in the sense of III) if and only if equation (3) (or (4)) is oscillatory in the sense of definition I) (or II)). The following geometrical interpretation may be given to definition III). The group $\mathop{\rm Sp} ( k, R)$ of symplectic matrices $X$ is homeomorphic to the product of a connected and simply-connected topological space by the circle. The corresponding mapping may be so chosen that $\mathop{\rm exp} ( i \sum _ {j = 1} ^ {k} \mathop{\rm Arg} \rho _ {j} )$ is the projection of the matrix $X \in \mathop{\rm Sp} ( k, R)$ onto the circle (here the $\rho _ {j}$ are the eigen values of the first kind of $X$). Thus, equation (2) is oscillatory if, for $t \rightarrow \infty$, $X( t)$ "winds unboundedly" in $\mathop{\rm Sp} ( k, R)$. (If $k = 1$, this group is homeomorphic to a "solid torus", and the "winding" has a visual interpretation.) There exist various other definitions of the argument of a symplectic matrix, which correspond to other mappings of the group $\mathop{\rm Sp} ( k, R)$ to the circle, and which are equivalent to (5) in the sense that they all satisfy the inequality

$$\tag{6 } | \Delta \mathop{\rm Arg} ^ \prime X ( t) - \Delta \mathop{\rm Arg} \ X ( t) | < c$$

for any curve $X( t) \in \mathop{\rm Sp} ( k, R)$. Such arguments are, for example,

$$\mathop{\rm Arg} _ {1} X = \ \mathop{\rm Arg} \mathop{\rm det} ( U _ {1} - iV _ {1} ); \ \ \mathop{\rm Arg} _ {2} X = \ \mathop{\rm Arg} \mathop{\rm det} ( U _ {2} - iV _ {2} ),$$

where $U _ {j} , V _ {j}$ are $( k \times k )$- submatrices of the matrix

$$X = \left \| \begin{array}{cc} U _ {1} &U _ {2} \\ V _ {1} &V _ {2} \\ \end{array} \ \right \|$$

There exist various effectively verifiable sufficient (and sometimes necessary and sufficient) conditions for oscillation and non-oscillation of equations (2), (3) and (4).
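For $k = 1$, where every real $2 \times 2$ matrix of determinant 1 is symplectic, the argument $\mathop{\rm Arg} _ {1} X = \mathop{\rm Arg} \mathop{\rm det} ( U _ {1} - iV _ {1} )$ can be evaluated in closed form. The rotation family below is an illustrative choice: for it $U _ {1} - iV _ {1} = e ^ {i \theta }$, so $\mathop{\rm Arg} _ {1} X ( t)$ winds exactly with the rotation angle.

```python
import numpy as np

# k = 1: any real 2x2 matrix with determinant 1 is symplectic.
def arg1(X):
    # Split X into its k x k blocks (here k = 1) and form Arg det(U1 - i*V1).
    U1, V1 = X[0, 0], X[1, 0]
    return np.angle(U1 - 1j * V1)

# Illustrative rotation family X(theta); for it U1 - i*V1 = exp(i*theta).
thetas = [0.3, 1.1, 2.5]
errs = []
for th in thetas:
    X = np.array([[np.cos(th), np.sin(th)], [-np.sin(th), np.cos(th)]])
    errs.append(abs(arg1(X) - th))
max_err = max(errs)
```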

## Linear Hamiltonian systems with periodic coefficients.

Let $H( t + T) = H( t)$ in (2) almost everywhere. The matrix $X( T)$ is called the monodromy matrix of equation (2), and its eigen values are called the multipliers of (2). Equation (2) (or the corresponding Hamiltonian $H( t)$) is called strongly stable if all its solutions are bounded on $( - \infty , + \infty )$ and this property is preserved under small deformations of the Hamiltonian in the sense of the norm $\| H \| = \int _ {0} ^ {T} | H( t) | dt$. Strong instability of equation (2) (of the Hamiltonian $H( t)$) is defined in an analogous manner. For equation (2) to be strongly stable it is necessary and sufficient that all its multipliers lie on the unit circle and that no two multipliers of different kinds coincide (in other words, that all root subspaces of $X ( T)$ be definite in the sense of the product $\langle x, y\rangle = i( Jx, y)$). Equation (2) is strongly unstable if and only if some of its multipliers lie outside the unit circle. Two samples of multipliers (taken with their kinds) which do not include coincident multipliers of different kinds are called equivalent if one sample can be continuously converted into the other so that multipliers of different kinds do not meet. The class of equivalent samples is called a multiplier type. In the case of stability there are $2 ^ {k}$ multiplier types. They may be denoted by symbols of the form $\mu = (+ , + , - , + \dots - )$ in which the plus and minus signs correspond to the kinds of the multipliers successively encountered when moving along the upper half-circle $| \rho | = 1$ from the point $\rho = + 1$ to the point $\rho = - 1$. Let $L = \{ H( t) \}$ be the Banach space of all Hamiltonians of the above type with norm $\| H \| = \int _ {0} ^ {T} | H( t) | dt$. The set $O \subset L$ of strongly-stable Hamiltonians breaks up in $L$ into a countable number of domains $O _ {n} ^ {( \mu ) }$, $n = 0, \pm 1, \pm 2 ,\dots$; $\mu = \mu _ {1} \dots \mu _ {2 ^ {k} }$.
The domain $O _ {n} ^ {( \mu ) }$ is the set of all Hamiltonians to which correspond the multiplier type $\mu$ and the integer $n$, defined by the formula

$$\left . \Delta \mathop{\rm Arg} X ( t) \right | _ {0} ^ {T} = \ 2n \pi + \sum _ {j = 1 } ^ { k } \theta _ {j} ,$$

where $\theta _ {j} = \mathop{\rm arg} \rho _ {j} ( T)$ are the arguments of the multipliers of the first kind. For $k = 1$ the set of strongly-unstable Hamiltonians breaks up into a countable number of domains; if $k > 1$ this set is connected. Various sufficient conditions for $H( t) \in O _ {n} ^ {( \mu ) }$ are known. Many of these conditions are obtained from the following theorem: Let $H _ {1} ( t) \leq H _ {2} ( t)$; it then follows from the strong stability of the "segment" $H _ {s} ( t) = sH _ {1} ( t) +( 1 - s) H _ {2} ( t)$, $0 \leq s \leq 1$, that a Hamiltonian $H ( t)$ for which $H _ {1} ( t) \leq H( t) \leq H _ {2} ( t)$ is strongly stable. A similar theorem has also been proved for the infinite-dimensional case $( k = \infty )$, where $\{ x \}$ is a Hilbert space and, in (2), $J$ and $H ( t)$ are operators with special properties; if $k = 1$ the theorem is valid for strongly-unstable Hamiltonians as well.
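A minimal numerical illustration of the strong-stability criterion, with an illustrative constant Hamiltonian $H _ {0} = I$ and period $T = 1$ (our choice, not from the text): both multipliers lie on the unit circle, and the form $\langle x, x \rangle = i( Jx, x)$ takes opposite nonzero signs on the two eigenvectors, so the multipliers are of different kinds and the equation is strongly stable.

```python
import numpy as np
from scipy.linalg import expm

J = np.array([[0.0, 1.0], [-1.0, 0.0]])

# Illustrative stable constant Hamiltonian: H0 = I, period T = 1, so the
# multipliers exp(+-i*T) are distinct and lie on |rho| = 1.
H0 = np.eye(2)
T = 1.0
X_T = expm(T * np.linalg.solve(J, H0))   # monodromy matrix X(T)

rho, V = np.linalg.eig(X_T)

# All multipliers on the unit circle ...
on_circle = bool(np.all(np.abs(np.abs(rho) - 1.0) < 1e-10))

# ... and <x,x> = i(Jx,x) has opposite (nonzero) signs on the two
# eigenvectors: the multipliers are of different kinds, hence strong stability.
signs = [float(np.real(1j * v.conj() @ J @ v)) for v in V.T]
opposite = signs[0] * signs[1] < 0.0
```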

## Parametric resonance.

Consider the equation

$$\tag{8 } J { \frac{dx}{dt} } = H _ {0} x ,$$

with a constant Hamiltonian $H _ {0}$ such that all the solutions of equation (8) are bounded. A frequency $\theta$ is said to be critical if for any $\delta > 0$ there exists a "perturbed" Hamiltonian equation

$$\tag{9 } J { \frac{dx}{dt} } = \ H ( \theta t) x,$$

where $H ( t + 2 \pi ) = H ( t)$, $\| H( t) - H _ {0} \| < \delta$, such that equation (9) has unbounded solutions ( $\theta$ may have any sign). The phenomenon when unbounded oscillations arise as a result of arbitrarily-small periodic perturbations of some of the system's parameters is called parametric resonance. Parametric resonance is of great importance in technology and in physics. It is more "dangerous" (or more "useful", depending on the problem) than ordinary resonance since, unlike the latter, the oscillations increase exponentially (and not polynomially), and the resonance frequencies are not discrete but fill intervals. The lengths of these intervals depend on the amplitude of the perturbation, and the intervals themselves contract to single points (which correspond to critical frequencies) as the amplitude of the perturbation tends to zero. Let $i \omega _ {1} \dots i \omega _ {k}$ be the eigen values of the first kind of the matrix $J ^ {-1} H _ {0}$ (then $- i \omega _ {1} \dots - i \omega _ {k}$ are those of the second kind). Let $\omega _ {j} + \omega _ {h} \neq 0$ ( $j, h = 1 \dots k$). The critical frequencies are the numbers $\theta _ {jh} ^ {( N)} = ( \omega _ {j} + \omega _ {h} )/N$ ( $j, h = 1 \dots k$; $N = \pm 1, \pm 2 ,\dots$), and only these numbers. In (9), let $H ( \theta t) = H _ {0} + \epsilon H _ {1} ( \theta t)$, where $\epsilon$ is a small parameter and

$$J ^ {- 1 } H _ {0} f _ {j} = \ i \omega _ {j} f _ {j} \ \ ( j = \pm 1 \dots \pm k),\ \ \omega _ {- j} = - \omega _ {j} ,$$

$$H _ {1} ( \tau ) = \sum _ { m } H ^ {( m) } e ^ {im \tau } .$$

The vector system $\{ f _ {j} \}$ may be chosen so that $\langle f _ {j} , f _ {h} \rangle = \delta _ {jh } \mathop{\rm sign} j$ ( $j, h = \pm 1 \dots \pm k$). In the "general case" the points $\{ \epsilon , \theta \}$ for which equation (9) with $H ( \theta t) = H _ {0} + \epsilon H _ {1} ( \theta t)$ is strongly unstable fill, near the $\theta$-axis, the domains $\Omega _ {1} ( \epsilon ) < \theta < \Omega _ {2} ( \epsilon )$, where $\Omega _ {1, 2 } = \theta _ {jh} ^ {( N)} + \epsilon \mu _ {1, 2 } + O ( \epsilon ^ {3/2} )$. The numbers $\mu _ {1}$, $\mu _ {2}$ can be simply expressed in terms of $H ^ {( m)}$ and $f _ {j}$.
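A numerical sketch of the principal critical frequency (all coefficients are illustrative): for $y ^ {\prime\prime} + \omega ^ {2} ( 1 + \epsilon \cos \theta t) y = 0$ with $\omega = 1$, the frequency $\theta = 2 \omega$ (the case $j = h$, $N = 1$) is critical. A multiplier lies off the unit circle exactly when the monodromy trace exceeds 2 in modulus, which happens at the critical frequency but not at a detuned one.

```python
import numpy as np
from scipy.integrate import solve_ivp

# y'' + omega^2 (1 + eps*cos(theta*t)) y = 0: parametric excitation of a
# unit-frequency oscillator (illustrative values).
omega, eps = 1.0, 0.4

def monodromy_trace(theta):
    T = 2.0 * np.pi / theta            # period of the perturbation
    def rhs(t, x):
        y, yp = x
        return [yp, -omega**2 * (1.0 + eps * np.cos(theta * t)) * y]
    cols = []
    for x0 in ([1.0, 0.0], [0.0, 1.0]):
        sol = solve_ivp(rhs, (0.0, T), x0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    return np.trace(np.array(cols).T)

# |trace| > 2  <=>  a multiplier lies off the unit circle (strong instability).
tr_critical = monodromy_trace(2.0 * omega)   # at the critical frequency 2*omega
tr_detuned = monodromy_trace(1.2)            # away from every theta_jh^(N)
```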

The magnitude $| ( H ^ {( N)} f _ {j} , f _ {- h} ) |$ is a characterization of the "degree of danger" of the critical frequency $\theta _ {jh} ^ {( N)}$: the higher its value, the wider the "instability wedge" adjacent to the point $( 0, \theta _ {jh} ^ {( N)} )$ and the nearer to the $\theta$-axis the domain of $\alpha$-exponential growth of the solutions with small $\alpha > 0$.

Results similar to the above have been obtained for equations (1) with complex coefficients ($H( t)$ is a Hermitian matrix function, $J ^ {*} = - J$, $\mathop{\rm det} J \neq 0$). A more general system

$$Q ( t) { \frac{dy}{dt} } = \ \left [ S ( t) - { \frac{1}{2} } { \frac{dQ}{dt} } \right ] y,$$

where

$$Q ( t) ^ {*} = - Q ( t),\ \ S ( t) ^ {*} = S ( t),\ \ \mathop{\rm det} Q ( t) \neq 0,$$

$$Q ( t + T) = Q ( t),\ S ( t + T) = S ( t),$$

has also been considered. It was found that the number of domains of stability is finite both in the real and in the complex case; a characterization of these domains was obtained in terms of the properties of the solutions of the corresponding equations.

A number of similar results were also obtained for operator equations (2) with bounded and unbounded operator coefficients in a Hilbert space.

How to Cite This Entry:
Hamiltonian system, linear. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Hamiltonian_system,_linear&oldid=47171
This article was adapted from an original article by V.A. Yakubovich (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article