Linear system of differential equations with periodic coefficients
A system of $ n $
linear ordinary differential equations of the form
$$ \tag{1 } \left . \begin{array}{c} \frac{d x _ {1} }{d t } = \ \alpha _ {11} ( t) x _ {1} + \dots + \alpha _ {1n} ( t) x _ {n} , \\ {\dots \dots \dots \dots \dots } \\ \frac{d x _ {n} }{dt} = \alpha _ {n1} ( t) x _ {1} + \dots + \alpha _ {nn} ( t) x _ {n} , \\ \end{array} \right \} $$
where $ t $ is a real variable, $ \alpha _ {jh} ( t) $ and $ x _ {h} = x _ {h} ( t) $ are complex-valued functions, and
$$ \tag{2 } \alpha _ {jh} ( t + T ) = \alpha _ {jh} ( t) \ \ \textrm{ for any } j , h . $$
The number $ T > 0 $ is called the period of the coefficients of the system (1). It is convenient to write (1) as one vector equation
$$ \tag{3 } \frac{dx}{dt} = A ( t) x , $$
where
$$ x ^ {T} = ( x _ {1} \dots x _ {n} ) ,\ \ A ( t) = \| \alpha _ {jh} ( t) \| ,\ \ j , h = 1 \dots n . $$
It is assumed that the functions $ \alpha _ {jh} ( t) $ are defined for $ t \in \mathbf R $ and are measurable and Lebesgue integrable on $ [ 0 , T ] $, and that the equalities (2) are satisfied almost everywhere, that is, $ A ( t + T ) = A ( t) $. A solution of (3) is a vector function $ x = x ( t) $ with absolutely continuous components such that (3) is satisfied almost everywhere. Let $ t _ {0} \in \mathbf R $ be an arbitrarily given number and $ a $ an arbitrarily given vector. A solution $ x ( t) $ satisfying the condition $ x ( t _ {0} ) = a $ exists and is uniquely determined. A matrix $ X ( t) $ of order $ n $ with absolutely continuous entries is called the matrizant (or evolution matrix, or transition matrix, or Cauchy matrix) of (3) if almost everywhere on $ \mathbf R $ one has
$$ \frac{dX}{dt} = A ( t) X $$
and $ X ( 0) = I $, where $ I $ is the unit $ n \times n $ matrix. The transition matrix $ X ( t) $ satisfies the relation
$$ X ( t + T ) = X ( t) X ( T) ,\ \ t \in \mathbf R . $$
The matrix $ X ( T) $ is called the monodromy matrix, and its eigen values $ \rho _ {j} $ are called the multipliers of (3). The equation
$$ \tag{4 } \mathop{\rm det} [ X ( T) - \rho I ] = 0 $$
for the multipliers $ \rho _ {j} $ is called the characteristic equation of equation (3) (or of the system (1)). To every eigen vector $ a ^ {(0)} $ of the monodromy matrix with multiplier $ \rho _ {0} $ there corresponds a solution $ x ^ {(0)} ( t) = X ( t) a ^ {(0)} $ of (3) satisfying the condition
$$ x ^ {(0)} ( t + T ) = \rho _ {0} x ^ {(0)} ( t) . $$
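These objects are easy to compute numerically. The sketch below (the particular $ 2 \times 2 $ $ T $-periodic matrix $ A ( t) $ is an illustrative choice, not from the text) integrates $ dX/dt = A ( t) X $, $ X ( 0) = I $, over one period with a Runge–Kutta scheme and reads off the multipliers as the eigen values of $ X ( T) $; since $ \mathop{\rm tr} A ( t) = 0 $ here, Liouville's formula forces the product of the multipliers to equal $ 1 $.

```python
import numpy as np

# Illustrative 2x2 system x' = A(t) x with T-periodic coefficients
# (a Hill-type example; this specific A(t) is not from the text).
T = 2 * np.pi

def A(t):
    return np.array([[0.0, 1.0],
                     [-(0.6 + 0.1 * np.cos(t)), 0.0]])

def monodromy(A, T, steps=4000):
    """Integrate dX/dt = A(t) X, X(0) = I, over one period (classical RK4)."""
    X = np.eye(2)
    h = T / steps
    t = 0.0
    for _ in range(steps):
        k1 = A(t) @ X
        k2 = A(t + h / 2) @ (X + h / 2 * k1)
        k3 = A(t + h / 2) @ (X + h / 2 * k2)
        k4 = A(t + h) @ (X + h * k3)
        X = X + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return X

U = monodromy(A, T)               # monodromy matrix X(T)
rho = np.linalg.eigvals(U)        # the multipliers of the system
# Liouville's formula: det X(T) = exp(int_0^T tr A(t) dt); since tr A = 0
# here, the product of the multipliers must equal 1.
```

A multiplier of modulus greater than $ 1 $ signals solutions that grow from period to period.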
The Floquet–Lyapunov theorem holds: The transition matrix of (3) with $ T $-periodic matrix $ A ( t) $ can be represented in the form
$$ \tag{5 } X ( t) = F ( t) e ^ {tK} , $$
where $ K $ is a constant matrix and $ F ( t) $ is an absolutely continuous matrix function, periodic with period $ T $, non-singular for all $ t \in \mathbf R $, and such that $ F ( 0) = I $. Conversely, if $ F ( t) $ and $ K $ are matrices with the given properties, then the matrix (5) is the transition matrix of an equation (3) with $ T $-periodic matrix $ A ( t) $. The matrix $ K $, called the indicator matrix, and the matrix function $ F ( t) $ in the representation (5) are not uniquely determined. In the case of real coefficients $ \alpha _ {jh} ( t) $, in (5) $ X ( t) $ is a real matrix, but $ F ( t) $ and $ K $ are, generally speaking, complex matrices. For this case there is a refinement of the Floquet–Lyapunov theorem: The transition matrix of (3) with $ T $-periodic real matrix $ A ( t) $ can be represented in the form (5), where $ K $ is a constant real matrix and $ F ( t) $ is a real absolutely continuous matrix function, non-singular for all $ t $, satisfying the relations
$$ F ( t + T ) = F ( t) L ,\ \ F ( 0) = I ,\ K L = L K , $$
where $ L $ is a real matrix such that
$$ L ^ {2} = I . $$
In particular, $ F ( t + 2 T ) = F ( t) $. Conversely, if $ F ( t) $, $ K $ and $ L $ are arbitrary matrices with the given properties, then (5) is the transition matrix of an equation (3) with a $ T $-periodic real matrix $ A ( t) $.
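A numerical sanity check of the representation (5), under the assumption that the monodromy matrix is diagonalizable (true for the illustrative $ A ( t) $ below, which is not from the text): take $ K = T ^ {-1} \mathop{\rm ln} X ( T) $ via an eigendecomposition and verify that $ F ( t) = X ( t) e ^ {-tK} $ is $ T $-periodic.

```python
import numpy as np

T = 2 * np.pi

def A(t):  # illustrative T-periodic coefficient matrix
    return np.array([[0.0, 1.0],
                     [-(0.6 + 0.1 * np.cos(t)), 0.0]])

def X_of(t_end, steps=8000):
    """Transition matrix X(t_end) of dX/dt = A(t) X, X(0) = I (RK4)."""
    X = np.eye(2, dtype=complex)
    h = t_end / steps
    t = 0.0
    for _ in range(steps):
        k1 = A(t) @ X
        k2 = A(t + h / 2) @ (X + h / 2 * k1)
        k3 = A(t + h / 2) @ (X + h / 2 * k2)
        k4 = A(t + h) @ (X + h * k3)
        X = X + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return X

w, V = np.linalg.eig(X_of(T))            # multipliers and eigenbasis of X(T)
lam = np.log(w) / T                      # characteristic exponents (one branch)

def exp_tK(t):                           # exp(tK) computed in that eigenbasis
    return V @ np.diag(np.exp(t * lam)) @ np.linalg.inv(V)

F = lambda t: X_of(t) @ np.linalg.inv(exp_tK(t))
err = np.abs(F(T / 2 + T) - F(T / 2)).max()   # should vanish: F is T-periodic
```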
An immediate consequence of (5) is Floquet's theorem, which asserts that equation (3) has a fundamental system of solutions splitting into subsets, each of which has the form
$$ x ^ {(1)} ( t) = e ^ {\lambda t } u _ {1} ( t) , $$

$$ x ^ {(2)} ( t) = e ^ {\lambda t } [ t u _ {1} ( t) + u _ {2} ( t) ] , $$

$$ {\dots \dots \dots \dots \dots } $$

$$ x ^ {(m)} ( t) = e ^ {\lambda t } \left [ \frac{t ^ {m-1} }{( m - 1 ) ! } u _ {1} ( t) + \dots + t u _ {m-1} ( t) + u _ {m} ( t) \right ] , $$
where the $ u _ {j} ( t) $ are absolutely continuous $ T $-periodic (generally speaking, complex-valued) vector functions. (The given subset of solutions corresponds to one $ ( m \times m ) $ cell of the Jordan form of $ K $.) If all elementary divisors of $ K $ are simple (in particular, if all roots of the characteristic equation (4) are simple), then there is a fundamental system of solutions of the form
$$ x ^ {(j)} ( t) = e ^ {\lambda _ {j} t } u _ {j} ( t) ,\ \ u _ {j} ( t + T ) = u _ {j} ( t) ,\ \ j = 1 \dots n . $$
Formula (5) implies that (3) is reducible (see Reducible linear system) to the equation
$$ \frac{dy}{dt} = K y $$
by means of the change of variable $ x = F ( t) y $ (Lyapunov's theorem).
Let $ \rho _ {1} \dots \rho _ {n} $ be the multipliers of equation (3) and let $ K $ be an arbitrary indicator matrix, that is,
$$ \tag{6 } e ^ {TK} = X ( T) . $$
The eigen values $ \lambda _ {1} \dots \lambda _ {n} $ of $ K $ are called the characteristic exponents (cf. Characteristic exponent) of (3). From (6) one obtains $ e ^ {T \lambda _ {j} } = \rho _ {j} $, $ j = 1 \dots n $. The characteristic exponent $ \lambda $ can be defined as the complex number for which (3) has a solution that is representable in the form
$$ x ( t) = e ^ {\lambda t } u ( t) , $$
where $ u ( t) $ is a $ T $-periodic vector-valued function. The main properties of the solutions in which one is usually interested in applications are determined by the characteristic exponents or multipliers of the given equation (see the Table).
In applications, the coefficients of (1) often depend on parameters; in the parameter space one must distinguish the domains at whose points the solutions of (1) have desired properties (usually these are the first four properties mentioned in the Table, or the fact that $ | x ( t) | \leq \textrm{ const } e ^ {- \alpha t } $ with $ \alpha $ given). These problems thus reduce to the calculation or estimation of the characteristic exponents (multipliers) of (1).
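For a Hill-type equation $ \ddot{x} + ( \delta + \epsilon \cos t ) x = 0 $, rewritten as a system (1), such a stability domain in the parameter $ \delta $ can be scanned numerically: since $ \mathop{\rm tr} A ( t) = 0 $, the multipliers satisfy $ \rho _ {1} \rho _ {2} = 1 $, and all solutions are bounded exactly when $ | \mathop{\rm tr} X ( T) | < 2 $. (A sketch with illustrative parameter values, not from the text.)

```python
import numpy as np

T = 2 * np.pi

def monodromy(delta, eps, steps=2000):
    """Monodromy matrix of x'' + (delta + eps*cos t) x = 0 as a 2x2 system."""
    def A(s):
        return np.array([[0.0, 1.0],
                         [-(delta + eps * np.cos(s)), 0.0]])
    X = np.eye(2)
    h = T / steps
    t = 0.0
    for _ in range(steps):
        k1 = A(t) @ X
        k2 = A(t + h / 2) @ (X + h / 2 * k1)
        k3 = A(t + h / 2) @ (X + h / 2 * k2)
        k4 = A(t + h) @ (X + h * k3)
        X = X + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return X

eps = 0.2
# Boundedness criterion |tr X(T)| < 2 scanned over a grid of delta values:
stable = [round(d, 2) for d in np.linspace(0.05, 1.0, 20)
          if abs(np.trace(monodromy(d, eps))) < 2.0]
# delta = 0.25 falls inside the first instability tongue (parametric
# resonance), while e.g. delta = 0.60 gives bounded solutions.
```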
The equation
$$ \tag{7 } \frac{dx}{dt} = A ( t) x + f ( t) , $$
where $ A ( t) $ and $ f ( t) $ are a measurable $ T $-periodic matrix function and vector function, respectively, that are Lebesgue integrable on $ [ 0 , T ] $ ($ A ( t + T ) = A ( t) $, $ f ( t + T ) = f ( t) $ almost everywhere), is called an inhomogeneous linear ordinary differential equation with periodic coefficients. If the corresponding homogeneous equation
$$ \tag{8 } \frac{dy}{dt} = A ( t) y $$
does not have $ T $-periodic solutions, then (7) has a unique $ T $-periodic solution. It can be determined by the formula
$$ x ( t) = [ I - R ( t , 0 ) ] ^ {-1} \int\limits _ { 0 } ^ { T } R ( t , T - \tau ) f ( t - \tau ) d \tau , $$
where $ R ( t , s ) = Y ( t + T ) Y ( t + s ) ^ {-1} $ and $ Y ( t) $ is the transition matrix of the homogeneous equation (8); here $ R ( t + T , s ) = R ( t , s ) $ and $ \mathop{\rm det} [ I - R ( t , 0 ) ] \neq 0 $.
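In the scalar case the formula is easy to check by hand. A sketch, assuming the concrete example $ dx/dt = - x + \cos t $ with $ T = 2 \pi $ (not from the text): the homogeneous equation has no $ T $-periodic solution, and imposing $ x ( T) = x ( 0) $ on the variation-of-constants formula gives the initial value of the unique periodic solution, in a form equivalent to the formula above.

```python
import numpy as np

# Scalar case of (7): dx/dt = -x + cos t, T = 2*pi.  Requiring x(T) = x(0)
# for x(t) = e^{-t} x(0) + int_0^t e^{-(t-s)} cos(s) ds gives
#   x(0) = (1 - e^{-T})^{-1} * int_0^T e^{-(T-s)} cos(s) ds.
T = 2 * np.pi
s = np.linspace(0.0, T, 20001)
y = np.exp(-(T - s)) * np.cos(s)
h = s[1] - s[0]
integral = h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)   # trapezoid rule
x0 = integral / (1.0 - np.exp(-T))
# The exact periodic solution is x(t) = (cos t + sin t)/2, so x0 ≈ 0.5.
```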
Suppose that (8) has $ d \geq 1 $ linearly independent $ T $-periodic solutions $ y _ {1} ( t) \dots y _ {d} ( t) $. Then the adjoint equation
$$ \frac{dz}{dt} = - A ( t) ^ {*} z $$
also has $ d $ linearly independent $ T $-periodic solutions, $ z _ {1} ( t) \dots z _ {d} ( t) $. The inhomogeneous equation (7) has a $ T $-periodic solution if and only if the orthogonality relations
$$ \tag{9 } \int\limits _ { 0 } ^ { T } ( f ( t) , z _ {j} ( t) ) dt = 0 ,\ j = 1 \dots d , $$
hold. If so, an arbitrary $ T $-periodic solution of (7) has the form
$$ x ( t) = x ^ {( 0 ) } ( t) + \gamma _ {1} y _ {1} ( t) + \dots + \gamma _ {d} y _ {d} ( t) , $$
where $ \gamma _ {1} \dots \gamma _ {d} $ are arbitrary numbers and $ x ^ {(0)} ( t) $ is a particular $ T $-periodic solution of (7). Under the additional conditions
$$ \int\limits _ { 0 } ^ { T } ( x ( t) , y _ {j} ( t) ) dt = 0 ,\ j = 1 \dots d , $$
the $ T $-periodic solution $ x ( t) $ is determined uniquely; moreover, there is a constant $ \theta > 0 $, independent of $ f ( t) $, such that
$$ | x ( t) | \leq \theta \left ( \int\limits _ { 0 } ^ { T } | f ( s) | ^ {2} ds \right ) ^ {1/2} ,\ t \in [ 0 , T ] . $$
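A minimal sketch of the solvability condition (9), in the simplest resonant case $ A ( t) \equiv 0 $ (scalar, illustrative): every constant is a $ T $-periodic solution of the homogeneous equation and of its adjoint, and (9) reduces to the statement that $ dx/dt = f ( t) $ has a $ T $-periodic solution exactly when $ \int _ {0} ^ {T} f ( t) dt = 0 $.

```python
import numpy as np

# With A(t) = 0, x(t) = x(0) + int_0^t f(s) ds is T-periodic iff the
# integral of f over one full period vanishes -- condition (9) with z = const.
T = 2 * np.pi
t = np.linspace(0.0, T, 10001)
h = t[1] - t[0]

def period_integral(y):                       # trapezoid rule over [0, T]
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

mean_ok = period_integral(np.sin(t))          # = 0: periodic solutions exist
mean_bad = period_integral(1.0 + np.sin(t))   # = T: no periodic solution
```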
Suppose one is given an equation
$$ \tag{10 } \frac{dx}{dt} = A ( t , \epsilon ) x $$
with a matrix coefficient that holomorphically depends on a complex "small" parameter $ \epsilon $:
$$ \tag{11 } A ( t , \epsilon ) = A _ {0} ( t) + \epsilon A _ {1} ( t) + \epsilon ^ {2} A _ {2} ( t) + \dots . $$
Suppose that for $ | \epsilon | < \epsilon _ {0} $ the series
$$ \| A _ {0} ( \cdot ) \| + \epsilon \| A _ {1} ( \cdot ) \| + \epsilon ^ {2} \| A _ {2} ( \cdot ) \| + \dots $$
converges, where
$$ \| A _ {j} ( \cdot ) \| = \int\limits _ { 0 } ^ { T } \| A _ {j} ( t) \| dt , $$
which guarantees the (componentwise) convergence of the series (11) for $ | \epsilon | < \epsilon _ {0} $ in the space $ L ( 0 , T ) $. Then the transition matrix $ X ( t , \epsilon ) $ of (10) for fixed $ t \in [ 0 , T ] $ is an analytic function of $ \epsilon $ for $ | \epsilon | < \epsilon _ {0} $. Let $ A _ {0} ( t) = C $ be a constant matrix with eigen values $ \lambda _ {j} $, $ j = 1 \dots n $. Let $ \rho _ {j} ( \epsilon ) $ be the multipliers of equation (10), $ \rho _ {j} ( 0) = \mathop{\rm exp} ( \lambda _ {j} T ) $. If $ \rho _ {h _ {1} } ( 0) = \dots = \rho _ {h _ {r} } ( 0) = \mathop{\rm exp} ( \alpha ^ {(0)} T) $ is a multiplier of multiplicity $ r $, then
$$ \tag{12 } \lambda _ {h} = \alpha ^ {(0)} + \frac{2 \pi i }{T} m _ {h} ,\ h = h _ {1} \dots h _ {r} , $$
where the $ m _ {h} $ are integers. If simple elementary divisors of the monodromy matrix correspond to this multiplier, or, in other words, if to each $ \lambda _ {h} $, $ h = h _ {1} \dots h _ {r} $, there correspond simple elementary divisors of the matrix $ C $ (for example, if all the numbers $ \lambda _ {h} $ are distinct), then $ \alpha ^ {(0)} $ is called an $ r $-fold characteristic exponent (of equation (10) with $ \epsilon = 0 $) of simple type. It turns out that the corresponding $ r $ characteristic exponents of (10) with small $ \epsilon > 0 $ can be very easily computed to a first approximation. Namely, let $ a _ {h} $ and $ b _ {h} $ be the corresponding normalized eigen vectors of the matrices $ C $ and $ C ^ {*} $:
$$ C a _ {h} = \lambda _ {h} a _ {h} ,\ C ^ {*} b _ {h} = \overline \lambda \; _ {h} b _ {h} , $$
$$ ( a _ {j} , b _ {h} ) = \delta _ {jh} ,\ j , h = h _ {1} \dots h _ {r} ; $$
let
$$ A _ {1} ( t) \sim \sum _ {m = - \infty } ^ {+ \infty } A _ {1} ^ {(m)} \mathop{\rm exp} \left ( \frac{2 \pi i mt }{T} \right ) $$
be the Fourier series of $ A _ {1} ( t) $, and let
$$ \sigma _ {jh} = ( A _ {1} ^ {( m _ {h} - m _ {j} ) } a _ {j} , b _ {h} ) ,\ j , h = h _ {1} \dots h _ {r} , $$
where the $ m _ {j} $ are the numbers from (12). Then for the corresponding $ r $ characteristic exponents $ \alpha _ {h} ( \epsilon ) $, $ h = h _ {1} \dots h _ {r} $, of (10), which become $ \alpha ^ {(0)} $ for $ \epsilon = 0 $, one has series expansions in fractional powers of $ \epsilon $, starting with terms of the first order:
$$ \tag{13 } \alpha _ {h} ( \epsilon ) = \alpha ^ {(0)} + \beta _ {h} \epsilon + O ( \epsilon ^ {1 + 1 / q _ {h} } ) ,\ h = h _ {1} \dots h _ {r} . $$
Here the $ \beta _ {h} $ are the roots (written as many times as their multiplicity) of the equation
$$ \mathop{\rm det} \| \sigma _ {jh} - \beta \delta _ {jh} \| = 0 $$
and the $ q _ {h} $ are natural numbers equal to the multiplicities of the corresponding $ \beta _ {h} $ ($ \delta _ {jj} = 1 $, $ \delta _ {jh} = 0 $ for $ j \neq h $). If the root $ \beta _ {h} $ is simple, then $ q _ {h} = 1 $ and the corresponding function $ \alpha _ {h} ( \epsilon ) $ is analytic at $ \epsilon = 0 $. From (13) it follows that cases are possible in which the "unperturbed" (that is, with $ \epsilon = 0 $) system is stable (all the $ \lambda _ {j} $ are purely imaginary and simple elementary divisors correspond to them), but the "perturbed" system (small $ \epsilon \neq 0 $) is unstable ($ \mathop{\rm Re} \beta _ {h} > 0 $ for at least one $ \beta _ {h} $). This phenomenon of loss of stability under an arbitrarily small periodic change of the parameters (with time) is called parametric resonance. Similar but more complicated formulas hold for characteristic exponents of non-simple type.
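Parametric resonance is easy to exhibit numerically. For $ \ddot{x} + ( 1/4 + \epsilon \cos t ) x = 0 $ (an illustrative Mathieu-type example, not from the text) the unperturbed characteristic exponents $ \pm i/2 $ are purely imaginary, and $ \lambda _ {1} - \lambda _ {2} = i = 2 \pi i / T $ with $ T = 2 \pi $, so the system sits at resonance; an arbitrarily small $ \epsilon \neq 0 $ pushes a multiplier off the unit circle:

```python
import numpy as np

T = 2 * np.pi

def max_multiplier(eps, steps=2000):
    """Largest |multiplier| of x'' + (1/4 + eps*cos t) x = 0 (RK4 monodromy)."""
    def A(s):
        return np.array([[0.0, 1.0],
                         [-(0.25 + eps * np.cos(s)), 0.0]])
    X = np.eye(2)
    h = T / steps
    t = 0.0
    for _ in range(steps):
        k1 = A(t) @ X
        k2 = A(t + h / 2) @ (X + h / 2 * k1)
        k3 = A(t + h / 2) @ (X + h / 2 * k2)
        k4 = A(t + h) @ (X + h * k3)
        X = X + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return max(abs(r) for r in np.linalg.eigvals(X))

r0 = max_multiplier(0.0)   # = 1: unperturbed multipliers stay on the unit circle
r1 = max_multiplier(0.2)   # > 1: the perturbed system is unstable
```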
Let $ \rho ^ {(1)} \dots \rho ^ {(q)} $ be the distinct multipliers of equation (3) and let $ n _ {1} \dots n _ {q} $ be their multiplicities, where $ n _ {1} + \dots + n _ {q} = n $. Suppose that the points $ \rho ^ {(j)} $ in the complex $ \zeta $-plane are surrounded by non-intersecting discs $ | \zeta - \rho ^ {(j)} | \leq R _ {j} $ and that a cut, not intersecting these discs, is drawn from the point $ \zeta = 0 $ to the point $ \zeta = \infty $. Suppose that with each multiplier $ \rho ^ {(j)} $ is associated an arbitrary integer $ m _ {j} $ and that $ U = X ( T , \epsilon ) $ is the transition matrix of (10). The branches of the logarithm $ ( \mathop{\rm ln} \zeta ) _ {m} $ are determined by means of the cut. The matrix $ \mathop{\rm ln} U $ (the matrix logarithm) can be defined by the formula
$$ \tag{14 } \mathop{\rm ln} U = \frac{1}{2 \pi i } \sum _ { j= 1 } ^ { q } \int\limits _ {\Gamma _ {j} } ( \zeta I - U ) ^ {-1} ( \mathop{\rm ln} \zeta ) _ {m _ {j} } d \zeta , $$
where $ \Gamma _ {j} $ is the circle $ | \zeta - \rho ^ {(j)} | = R _ {j} $. The set of numbers $ m _ {1} \dots m _ {q} $ determines a branch of the matrix logarithm. Also, $ \mathop{\rm exp} ( \mathop{\rm ln} U ) = U $ for small $ \epsilon $. Generally speaking, formula (14) for all possible $ m _ {1} \dots m _ {q} $ does not cover all the values of the matrix logarithm, that is, all solutions $ Z $ of the equation $ \mathop{\rm exp} Z = U $. However, the solution given by (14) has the important property of holomorphy: the entries of the matrix $ \mathop{\rm ln} U $ in (14) are holomorphic functions of the entries of $ U $. For equation (10), formula (5) takes the form
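Formula (14) can be implemented directly. The sketch below (an illustrative $ 2 \times 2 $ matrix $ U $ with distinct eigen values; principal branch $ m _ {j} = 0 $ for both) approximates each contour integral by the trapezoid rule on the circle $ \Gamma _ {j} $, which converges geometrically for analytic integrands:

```python
import numpy as np

# Illustrative diagonalizable matrix with distinct positive eigenvalues,
# so the cut from 0 to infinity along the negative real axis misses the discs.
U = np.array([[2.0, 1.0],
              [0.0, 0.5]])
eigs = np.linalg.eigvals(U)
branches = [0, 0]                  # m_1, ..., m_q: principal branch everywhere

def log_matrix(U, eigs, branches, R=0.1, N=400):
    """Formula (14): sum of contour integrals of (zeta I - U)^{-1} ln zeta."""
    n = U.shape[0]
    out = np.zeros((n, n), dtype=complex)
    for rho, mj in zip(eigs, branches):
        for th in 2 * np.pi * np.arange(N) / N:
            z = rho + R * np.exp(1j * th)              # point on Gamma_j
            dz = 1j * R * np.exp(1j * th) * (2 * np.pi / N)
            lnz = np.log(z) + 2j * np.pi * mj          # branch (ln zeta)_{m_j}
            out += lnz * np.linalg.inv(z * np.eye(n) - U) * dz
    return out / (2j * np.pi)

L = log_matrix(U, eigs, branches)
# Check: the eigenvalues of L are ln 2 and ln(1/2), and L is real here.
```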
$$ \tag{15 } X ( t , \epsilon ) = F ( t , \epsilon ) \mathop{\rm exp} [ tK ( \epsilon ) ] , $$
where $ F ( t + T , \epsilon ) = F ( t , \epsilon ) $ and $ K ( \epsilon ) = T ^ {-1} \mathop{\rm ln} X ( T , \epsilon ) $. If $ \mathop{\rm ln} X ( T , \epsilon ) $ is determined in accordance with (14), then
$$ \tag{16 } \left . \begin{array}{c} K ( \epsilon ) = K _ {0} + \epsilon K _ {1} + \dots , \\ F ( t , \epsilon ) = F _ {0} ( t) + \epsilon F _ {1} ( t) + \dots \\ \end{array} \right \} $$
are series that converge for small $ | \epsilon | $. The main information about the behaviour of the solutions as $ t \rightarrow + \infty $ which is usually of interest in applications is contained in the indicator matrix $ K ( \epsilon ) $. Below a method for the asymptotic integration of (10) is given, that is, a method for successively determining the coefficients $ K _ {j} $ and $ F _ {j} ( t) $ in (16).
Suppose that $ A _ {0} ( t) \equiv C $ in (11). Although $ X ( t , 0 ) = \mathop{\rm exp} ( tC ) $, generally speaking there is no branch of the matrix logarithm such that the matrix $ K ( \epsilon ) $ is analytic at $ \epsilon = 0 $ and $ K ( 0) = C $. Such a branch of the logarithm does exist in the so-called nonresonance case, when among the eigen values $ \lambda _ {j} $ of $ C $ there are no numbers for which
$$ \lambda _ {j} - \lambda _ {h} = \frac{2 \pi m i }{T} \neq 0 $$
($ m $ is an integer). In the resonance case (when such eigen values exist) equation (10) reduces, by a suitable change of variable $ x = P ( t) y $ with $ P ( t + T ) = P ( t) $, to an analogous equation for which the nonresonance case holds. The matrix $ P ( t) $ can be determined from the matrix $ C $.
In (16), in the nonresonance case $ K _ {0} = C $, $ F _ {0} ( t) \equiv I $, and the matrices $ F _ {j} ( t) $, $ K _ {j} $, $ j = 1 , 2 \dots $ are found from the equation
$$ \frac{dF}{dt} = [ C + \epsilon A _ {1} ( t) + \epsilon ^ {2} A _ {2} ( t) + \dots ] F ( t , \epsilon ) - F ( t , \epsilon ) K ( \epsilon ) , $$
by equating coefficients of like powers of $ \epsilon $ in this equation. To determine $ Z ( t) = F _ {j} ( t) $ and $ L = K _ {j} $ one obtains a matrix equation of the form
$$ \tag{17 } \frac{dZ}{dt} = CZ - ZC + \Phi ( t) - L , $$
where $ \Phi ( t + T ) = \Phi ( t) $. The matrices $ Z ( t) $ and $ L $ are found, moreover uniquely (in the nonresonance case), from (17) and the periodicity condition $ Z ( t + T ) = Z ( t) $.
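When $ C = \mathop{\rm diag} ( \lambda _ {1} , \lambda _ {2} ) $, equation (17) decouples entrywise, which makes the unique solvability in the nonresonance case transparent. A sketch with illustrative values (not from the text): for a pure harmonic $ \Phi _ {jh} ( t) = p e ^ {i \omega t} $ with $ \omega = 2 \pi m / T $, $ m \neq 0 $, one takes $ L _ {jh} = 0 $ and $ Z _ {jh} ( t) = p e ^ {i \omega t} / ( i \omega - ( \lambda _ {j} - \lambda _ {h} ) ) $.

```python
import numpy as np

# Entrywise form of (17): Z'_{jh} = (l_j - l_h) Z_{jh} + Phi_{jh}(t) - L_{jh}.
# Nonresonance: l_j - l_h is not a nonzero integer multiple of 2*pi*i/T,
# so the divisor i*w - (l_j - l_h) below never vanishes for m != 0.
T = 2 * np.pi
l1, l2 = 0.3, -0.1        # illustrative (real) eigenvalues of C
d = l1 - l2
w = 2 * np.pi / T         # first harmonic (m = 1)
p = 1.5                   # illustrative Fourier coefficient of Phi_{jh}

t = np.linspace(0.0, T, 9)
Z = p * np.exp(1j * w * t) / (1j * w - d)          # candidate periodic solution
residual = np.abs(1j * w * Z - (d * Z + p * np.exp(1j * w * t))).max()
# residual = 0 up to rounding: the ansatz satisfies the decoupled equation,
# and Z(0) = Z(T) since exp(i*w*T) = 1.
```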
For special cases of the system (1) see Hamiltonian system, linear and Hill equation.
References
[1]  I.Z. Shtokalo, "Linear differential equations with variable coefficients: criteria of stability and unstability of their solutions" , Hindustan Publ. Comp. (1961) (Translated from Russian) 
[2]  N.P. Erugin, "Linear systems of ordinary differential equations with periodic and quasiperiodic coefficients" , Acad. Press (1966) (Translated from Russian) 
[3]  V.A. Yakubovich, V.M. Starzhinskii, "Linear differential equations with periodic coefficients" , Wiley (1975) (Translated from Russian) 
Comments
References
[a1]  R.W. Brockett, "Finite dimensional linear systems" , Wiley (1970) 
[a2]  J.K. Hale, "Ordinary differential equations" , Wiley (1969) 
Linear system of differential equations with periodic coefficients. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Linear_system_of_differential_equations_with_periodic_coefficients&oldid=47666