# Linear ordinary differential equation

An ordinary differential equation (cf. Differential equation, ordinary) that is linear in the unknown function of one independent variable and its derivatives, that is, an equation of the form

$$\tag{1} x^{(n)} + a_1(t) x^{(n-1)} + \dots + a_n(t) x = f(t),$$

where $x(t)$ is the unknown function and $a_i(t)$, $f(t)$ are given functions; the number $n$ is called the order of equation (1). Below, the general theory of linear ordinary differential equations is presented; for equations of the second order see also Linear ordinary differential equation of the second order.

1) If in (1) the functions $a_1, \dots, a_n, f$ are continuous on the interval $(a, b)$, then for any numbers $x_0, x_0', \dots, x_0^{(n-1)}$ and $t_0 \in (a, b)$ there is a unique solution $x(t)$ of (1) defined on the whole interval $(a, b)$ and satisfying the initial conditions

$$x(t_0) = x_0, \quad x'(t_0) = x_0', \quad \dots, \quad x^{(n-1)}(t_0) = x_0^{(n-1)}.$$
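As a numerical illustration of this existence-and-uniqueness statement (a sketch, not part of the original article), one can integrate a simple initial value problem with `scipy.integrate.solve_ivp` after rewriting the $n$-th order equation as a first-order system. The example below takes $x'' + x = 0$ with $x(0) = 0$, $x'(0) = 1$, whose unique solution is $x(t) = \sin t$.

```python
# Illustration: the IVP for equation (1) with n = 2, a_1 = 0, a_2 = 1, f = 0,
# i.e. x'' + x = 0, x(0) = 0, x'(0) = 1; unique solution x(t) = sin t.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # y[0] = x, y[1] = x'; the n-th order equation becomes a first-order system
    return [y[1], -y[0]]

sol = solve_ivp(rhs, (0.0, np.pi / 2), [0.0, 1.0], rtol=1e-10, atol=1e-12)
x_end = sol.y[0, -1]   # numerical value of x(pi/2); exact solution gives sin(pi/2) = 1
```

The same reduction to a first-order system works for any equation of the form (1) with continuous coefficients.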

The equation

$$\tag{2} x^{(n)} + a_1(t) x^{(n-1)} + \dots + a_n(t) x = 0$$

is called the homogeneous equation corresponding to the inhomogeneous equation (1). If $x ( t)$ is a solution of (2) and

$$x(t_0) = x'(t_0) = \dots = x^{(n-1)}(t_0) = 0,$$

then $x(t) \equiv 0$. If $x_1(t), \dots, x_m(t)$ are solutions of (2), then any linear combination

$$C _ {1} x _ {1} ( t) + \dots + C _ {m} x _ {m} ( t)$$

is a solution of (2). If the $n$ functions

$$\tag{3} x_1(t), \dots, x_n(t)$$

are linearly independent solutions of (2), then for every solution $x(t)$ of (2) there are constants $C_1, \dots, C_n$ such that

$$\tag{4 } x ( t) = C _ {1} x _ {1} ( t) + \dots + C _ {n} x _ {n} ( t) .$$

Thus, if (3) is a fundamental system of solutions of (2) (i.e. a system of $n$ linearly independent solutions of (2)), then its general solution is given by (4), where $C_1, \dots, C_n$ are arbitrary constants. For every non-singular $n \times n$ matrix $B = \| b_{ij} \|$ and every $t_0 \in (a, b)$ there is a fundamental system of solutions (3) of equation (2) such that

$$x_i^{(n-j)}(t_0) = b_{ij}, \quad i, j = 1, \dots, n.$$

For the functions (3) the determinant

$$W(t) = \det \left\| \begin{array}{ccc} x_1(t) & \dots & x_n(t) \\ x_1'(t) & \dots & x_n'(t) \\ \vdots & & \vdots \\ x_1^{(n-1)}(t) & \dots & x_n^{(n-1)}(t) \end{array} \right\|$$

is called the Wronski determinant, or Wronskian. If (3) is a fundamental system of solutions of (2), then $W(t) \neq 0$ for all $t \in (a, b)$. If $W(t_0) = 0$ for at least one point $t_0$, then $W(t) \equiv 0$, and in this case the solutions (3) of equation (2) are linearly dependent. For the Wronskian of the solutions (3) of equation (2) the Liouville–Ostrogradski formula holds:

$$W(t) = W(t_0) \exp\left(- \int_{t_0}^{t} a_1(\tau)\, d\tau \right).$$
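The formula can be checked numerically on a concrete equation (an illustrative sketch, not from the article): for $x'' + 3x' + 2x = 0$ a fundamental system is $x_1 = e^{-t}$, $x_2 = e^{-2t}$, and since $a_1(\tau) \equiv 3$, the formula predicts $W(t) = W(0)\, e^{-3t}$.

```python
# Check of the Liouville-Ostrogradski formula on x'' + 3x' + 2x = 0.
import math

def wronskian(t):
    # fundamental system: x1 = e^{-t}, x2 = e^{-2t}, with their derivatives
    x1, dx1 = math.exp(-t), -math.exp(-t)
    x2, dx2 = math.exp(-2 * t), -2.0 * math.exp(-2 * t)
    return x1 * dx2 - x2 * dx1   # 2x2 Wronski determinant

t0, t = 0.0, 1.3
predicted = wronskian(t0) * math.exp(-3.0 * (t - t0))   # a_1(tau) = 3
```

Both the direct computation and the formula give $W(1.3) = -e^{-3.9}$.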

The general solution of (1) is the sum of the general solution of the homogeneous equation (2) and a particular solution $x _ {0} ( t)$ of the inhomogeneous equation (1), and is given by the formula

$$x ( t) = C _ {1} x _ {1} ( t) + \dots + C _ {n} x _ {n} ( t) + x _ {0} ( t) ,$$

where $x_1(t), \dots, x_n(t)$ is a fundamental system of solutions of (2) and $C_1, \dots, C_n$ are arbitrary constants. If a fundamental system of solutions (3) of equation (2) is known, then a particular solution of the inhomogeneous equation (1) can be found by the method of variation of constants.

2) A system of linear ordinary differential equations of order $n$ is a system

$$\dot{x}_i = \sum_{j=1}^{n} a_{ij}(t) x_j + b_i(t), \quad i = 1, \dots, n,$$

or, in vector form,

$$\tag{5 } \dot{x} = A ( t) x + b ( t) ,$$

where $x ( t) \in \mathbf R ^ {n}$ is an unknown column vector, $A ( t)$ is a square matrix of order $n$ and $b ( t)$ is a given vector function. Suppose also that $A ( t)$ and $b ( t)$ are continuous on some interval $( a , b )$. In this case, for any $t _ {0} \in ( a , b )$ and $x _ {0} \in \mathbf R ^ {n}$ there is a unique solution $x ( t)$ of the system (5) defined on the whole interval $( a , b )$ and satisfying the initial condition $x ( t _ {0} ) = x _ {0}$.

The linear system

$$\tag{6 } \dot{x} = A ( t) x$$

is called the homogeneous system corresponding to the inhomogeneous system (5). If $x(t)$ is a solution of (6) and $x(t_0) = 0$, then $x(t) \equiv 0$; if $x_1(t), \dots, x_m(t)$ are solutions, then any linear combination

$$C _ {1} x _ {1} ( t) + \dots + C _ {m} x _ {m} ( t)$$

is a solution of (6); if $x_1(t), \dots, x_m(t)$ are linearly independent solutions of (6), then the vectors $x_1(t), \dots, x_m(t)$ are linearly independent for any $t \in (a, b)$. If the $n$ vector functions

$$\tag{7} x_1(t), \dots, x_n(t)$$

form a fundamental system of solutions of (6), then for every solution $x(t)$ of (6) there are constants $C_1, \dots, C_n$ such that

$$\tag{8 } x( t) = C _ {1} x _ {1} ( t) + \dots + C _ {n} x _ {n} ( t).$$

Thus, formula (8) gives the general solution of (6). For any $t_0 \in (a, b)$ and any linearly independent vectors $a_1, \dots, a_n \in \mathbf R^n$ there is a fundamental system of solutions (7) of the system (6) such that

$$x_1(t_0) = a_1, \quad \dots, \quad x_n(t_0) = a_n.$$

For vector functions (7) that are solutions of (6), the determinant $W ( t)$ of the matrix

$$\tag{9} X(t) = \left\| \begin{array}{ccc} x_{11}(t) & \dots & x_{n1}(t) \\ \vdots & & \vdots \\ x_{1n}(t) & \dots & x_{nn}(t) \end{array} \right\|,$$

where $x_{ij}(t)$ is the $j$-th component of the $i$-th solution, is called the Wronski determinant, or Wronskian. If (7) is a fundamental system of solutions of (6), then $W(t) \neq 0$ for all $t \in (a, b)$, and (9) is called a fundamental matrix. If the solutions (7) of the system (6) are linearly dependent for at least one point $t_0$, then they are linearly dependent for any $t \in (a, b)$, and in this case $W(t) \equiv 0$. For the Wronskian of the solutions (7) of the system (6) Liouville's formula holds:

$$W(t) = W(t_0) \exp\left(\int_{t_0}^{t} \operatorname{Tr} A(\tau)\, d\tau \right),$$

where $\mathop{\rm Tr} ( A ( \tau ) ) = a _ {11} ( \tau ) + \dots + a _ {nn} ( \tau )$ is the trace of the matrix $A ( \tau )$. The matrix (9) satisfies the matrix equation $\dot{X} = A ( t) X ( t)$. If $X ( t)$ is a fundamental matrix of the system (6), then for every other fundamental matrix $Y ( t)$ of this system there is a constant non-singular $n \times n$ matrix $C$ such that $Y ( t) = X ( t) C$. If $X ( t _ {0} ) = E$, where $E$ is the unit matrix, then the fundamental matrix $X ( t)$ is said to be normalized at the point $t _ {0}$ and the formula $x ( t) = X ( t) x _ {0}$ gives the solution of (6) satisfying the initial condition $x ( t _ {0} ) = x _ {0}$.

If the matrix $A ( t)$ commutes with its integral, then the fundamental matrix of (6) normalized at the point $t _ {0} \in ( a , b )$ is given by the formula

$$X ( t) = \mathop{\rm exp} \left ( \int\limits _ {t _ {0} } ^ { t } A ( \tau ) d \tau \right ) .$$

In particular, for a constant matrix $A$ the fundamental matrix normalized at the point $t _ {0}$ is given by the formula $X ( t) = \mathop{\rm exp} A ( t - t _ {0} )$. The general solution of (5) is the sum of the general solution of the homogeneous system (6) and a particular solution $x _ {0} ( t)$ of (5) and is given by the formula
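For a constant matrix this normalized fundamental matrix can be computed with `scipy.linalg.expm` (an illustrative sketch, not from the article). Below $A$ is the $2 \times 2$ rotation generator, so $X(t) = \exp(At)$ is a rotation by angle $t$; it satisfies $X(0) = E$ and carries the initial vector $(1, 0)$ to $(0, 1)$ after a quarter turn.

```python
# Normalized fundamental matrix X(t) = exp(A (t - t0)) for constant A.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])      # constant coefficient matrix of x' = Ax
t0 = 0.0

def X(t):
    # fundamental matrix normalized at t0, i.e. X(t0) = E
    return expm(A * (t - t0))

x0 = np.array([1.0, 0.0])
x_quarter = X(np.pi / 2) @ x0    # x(t) = X(t) x0 solves the IVP x(t0) = x0
```

Here $A$ is constant, hence trivially commutes with its integral, so the preceding exponential formula applies.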

$$x ( t) = C _ {1} x _ {1} ( t) + \dots + C _ {n} x _ {n} ( t) + x _ {0} ( t) ,$$

where $x_1(t), \dots, x_n(t)$ is a fundamental system of solutions of (6) and $C_1, \dots, C_n$ are arbitrary constants. If a fundamental system of solutions (7) of the system (6) is known, then a particular solution of the inhomogeneous system (5) can be found by the method of variation of constants. If $X(t)$ is a fundamental matrix of the system (6), then the formula

$$x(t) = X(t) X^{-1}(t_0) x_0 + \int_{t_0}^{t} X(t) X^{-1}(\tau) b(\tau)\, d\tau$$

gives the solution of (5) satisfying the initial condition $x ( t _ {0} ) = x _ {0}$.
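This variation-of-constants formula can be tested on a one-dimensional example (a sketch with hypothetical data, not from the article): take $A = (-1)$, $b(\tau) \equiv 1$, $x_0 = 0$, for which the exact solution is $x(t) = 1 - e^{-t}$; the integral term is evaluated with a simple trapezoidal rule.

```python
# Variation-of-constants formula for (5) with constant coefficients.
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0]])           # 1x1 system: x' = -x + 1
def b(tau):
    return np.array([1.0])

t0, t = 0.0, 2.0
x0 = np.array([0.0])

def X(s):
    return expm(A * s)           # fundamental matrix normalized at s = 0

# integrand X(t) X^{-1}(tau) b(tau), sampled on a fine grid
taus = np.linspace(t0, t, 2001)
vals = np.array([X(t) @ np.linalg.inv(X(tau)) @ b(tau) for tau in taus])

# trapezoidal rule for the integral term
h = taus[1] - taus[0]
integral = h * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)

x_t = X(t) @ np.linalg.inv(X(t0)) @ x0 + integral
exact = 1.0 - np.exp(-t)
```

For constant $A$ the kernel simplifies to $X(t) X^{-1}(\tau) = e^{A(t - \tau)}$, but the code keeps the general form of the formula.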

3) Suppose that in the systems (5) and (6) $A(t)$ and $b(t)$ are continuous on a half-line $[a, +\infty)$. All solutions of (5) are simultaneously either stable or unstable, so the system (5) is said to be stable (uniformly stable, asymptotically stable) if all its solutions are stable (respectively, uniformly stable, asymptotically stable; cf. Asymptotically-stable solution; Lyapunov stability). The system (5) is stable (uniformly stable, asymptotically stable) if and only if the system (6) is stable (respectively, uniformly stable, asymptotically stable). Therefore, in investigating the stability of linear differential systems it suffices to consider only homogeneous systems.

The system (6) is stable if and only if all its solutions are bounded on the half-line $[ a , + \infty )$. The system (6) is asymptotically stable if and only if

$$\tag{10 } \lim\limits _ {t \rightarrow + \infty } \ x ( t) = 0$$

for all its solutions $x(t)$. The latter condition is equivalent to (10) being satisfied for $n$ solutions $x_1(t), \dots, x_n(t)$ of the system that form a fundamental system of solutions. An asymptotically-stable system (6) is asymptotically stable in the large.

A linear system with constant coefficients

$$\tag{11 } \dot{x} = A x$$

is stable if and only if all eigenvalues $\lambda_1, \dots, \lambda_n$ of $A$ have non-positive real parts (that is, $\operatorname{Re} \lambda_i \leq 0$, $i = 1, \dots, n$) and the eigenvalues with zero real part have only simple elementary divisors. The system (11) is asymptotically stable if and only if all eigenvalues of $A$ have negative real parts.
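The eigenvalue criteria translate directly into code (a sketch; the helper names are ours, and the stability check below tests only the real-part condition, not the simple-elementary-divisor condition for purely imaginary eigenvalues).

```python
# Eigenvalue tests for stability of the constant-coefficient system (11).
import numpy as np

def is_asymptotically_stable(A, tol=1e-12):
    # all eigenvalues must have strictly negative real part
    return bool(np.all(np.linalg.eigvals(A).real < -tol))

def real_parts_nonpositive(A, tol=1e-12):
    # necessary condition for stability (ignores elementary divisors)
    return bool(np.all(np.linalg.eigvals(A).real <= tol))

A1 = np.array([[-1.0, 5.0],
               [0.0, -2.0]])   # eigenvalues -1, -2: asymptotically stable
A2 = np.array([[0.0, -1.0],
               [1.0,  0.0]])   # eigenvalues +/- i: stable but not asymptotically
```

For $A_2$ the eigenvalues are simple, so the system is stable; it is not asymptotically stable because the real parts vanish.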

4) The system

$$\tag{12 } \dot{y} = - A ^ {T} ( t) y ,$$

where $A^{T}(t)$ is the transpose of the matrix $A(t)$, is called the adjoint system of the system (6). If $x(t)$ and $y(t)$ are arbitrary solutions of (6) and (12), respectively, then the scalar product

$$( x ( t) , y ( t) ) \equiv \textrm{ const } .$$

If $X ( t)$ and $Y ( t)$ are fundamental matrices of solutions of (6) and (12), respectively, then

$$Y ^ {T} ( t) X ( t) = C ,$$

where $C$ is a non-singular constant matrix.
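The invariance of the scalar product can be verified numerically for a constant matrix (an illustrative sketch with arbitrarily chosen data): then $x(t) = e^{At} x_0$ solves (6) and $y(t) = e^{-A^T t} y_0$ solves (12), so $(x(t), y(t)) = y_0^T x_0$ for all $t$.

```python
# Check that (x(t), y(t)) is constant for solutions of (6) and its adjoint (12).
import numpy as np
from scipy.linalg import expm

A = np.array([[0.5, 2.0],
              [-1.0, 0.3]])     # arbitrary constant coefficient matrix
x0 = np.array([1.0, -2.0])
y0 = np.array([3.0, 0.7])

def inner_product(t):
    x_t = expm(A * t) @ x0      # solution of x' = A x
    y_t = expm(-A.T * t) @ y0   # solution of the adjoint system y' = -A^T y
    return float(y_t @ x_t)
```

At $t = 0$ the product is $y_0^T x_0 = 1.6$, and it stays at that value for all $t$.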

5) The investigation of various special properties of linear systems, particularly the question of stability, is connected with the concept of the Lyapunov characteristic exponent of a solution and the first method in the theory of stability developed by A.M. Lyapunov (see Regular linear system; Reducible linear system; Lyapunov stability).

6) Two systems of the form (6) are said to be asymptotically equivalent if there is a one-to-one correspondence between their solutions $x _ {1} ( t)$ and $x _ {2} ( t)$ such that

$$\lim\limits _ {t \rightarrow \infty } \ ( x _ {1} ( t) - x _ {2} ( t) ) = 0 .$$

If the system (11) with a constant matrix $A$ is stable, then it is asymptotically equivalent to the system $\dot{x} = ( A + B ( t)) x$, where the matrix $B ( t)$ is continuous on $[ a , + \infty )$ and

$$\tag{13 } \int\limits _ { 0 } ^ \infty \| B ( t) \| dt < \infty .$$

If (13) is satisfied, the system $\dot{x} = B ( t) x$ is asymptotically equivalent to the system $\dot{x} = 0$.

Two systems of the form (11) with constant coefficients are said to be topologically equivalent if there is a homeomorphism $h : \mathbf R^n \rightarrow \mathbf R^n$ that takes oriented trajectories of one system into oriented trajectories of the other. If two square matrices $A$ and $B$ of order $n$ have the same number of eigenvalues with negative real part and no eigenvalues with zero real part, then the systems $\dot{x} = Ax$ and $\dot{x} = Bx$ are topologically equivalent.

7) Suppose that in the system (6) the matrix $A ( t)$ is continuous and bounded on the whole real axis. The system (6) is said to have exponential dichotomy if the space $\mathbf R ^ {n}$ splits into a direct sum: $\mathbf R ^ {n} = \mathbf R ^ {n _ {1} } \oplus \mathbf R ^ {n _ {2} }$, $n _ {1} + n _ {2} = n$, so that for every solution $x ( t)$ with $x ( 0) \in \mathbf R ^ {n _ {1} }$ the inequality

$$\| x(t) \| \geq c e^{k(t - t_0)}$$

holds, and for every solution $x ( t)$ with $x ( 0) \in \mathbf R ^ {n _ {2} }$ the inequality

$$\| x(t) \| \leq c^{-1} e^{-k(t - t_0)}$$

holds for all $t_0 \in \mathbf R$ and $t \geq t_0$, where $0 < c \leq 1$ and $k > 0$ are constants. For example, exponential dichotomy is present in a system (11) with constant matrix $A$ if $A$ has no eigenvalues with zero real part (such a system is said to be hyperbolic). If the vector function $b(t)$ is bounded on the whole real axis, then a system (5) having exponential dichotomy has a unique solution that is bounded on the whole line $\mathbf R$.
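A minimal concrete instance (an illustrative sketch, not from the article): for the hyperbolic system $\dot{x} = Ax$ with $A = \operatorname{diag}(1, -1)$, the splitting is $\mathbf R^2 = \mathbf R^{n_1} \oplus \mathbf R^{n_2}$ with the two coordinate axes as subspaces, and the particular solutions below satisfy the dichotomy inequalities with $c = 1$, $k = 1$.

```python
# Exponential dichotomy for x' = diag(1, -1) x: one axis grows, one decays.
import math

c, k = 1.0, 1.0
t0, t = 0.4, 3.0

def norm_growing(t, t0):
    # ||x(t)|| for the solution x(t) = (e^{t - t0}, 0), with x(t0) on R^{n_1}
    return math.exp(t - t0)

def norm_decaying(t, t0):
    # ||x(t)|| for the solution x(t) = (0, e^{-(t - t0)}), with x(t0) on R^{n_2}
    return math.exp(-(t - t0))

lower_bound = c * math.exp(k * (t - t0))          # bound the growing solution must exceed
upper_bound = (1.0 / c) * math.exp(-k * (t - t0)) # bound the decaying solution must respect
```

Since $A$ has eigenvalues $1$ and $-1$, neither with zero real part, this system is hyperbolic in the sense above.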

How to Cite This Entry:
Linear ordinary differential equation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Linear_ordinary_differential_equation&oldid=47659
This article was adapted from an original article by N.N. Ladis (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article