Differential equation, ordinary

{{TEX|done}}
 
An equation with a function in one independent variable as unknown, containing not only the unknown function itself, but also its derivatives of various orders.
  
The term "differential equations" was was proposed in 1676 by G. Leibniz. The first studies of these equations were carried out in the late 17th century in the context of certain problems in mechanics and geometry.
+
The term "differential equations" was proposed in 1676 by G. Leibniz. The first studies of these equations were carried out in the late 17th century in the context of certain problems in mechanics and geometry.
  
Ordinary differential equations have important applications and are a powerful tool in the study of many problems in the natural sciences and in technology; they are extensively employed in mechanics, astronomy, physics, and in many problems of chemistry and biology. The reason is that the objective laws governing certain phenomena (processes) can be written as ordinary differential equations, so that the equations themselves are a quantitative expression of these laws. For instance, Newton's laws of mechanics make it possible to reduce the description of the motion of mass points or of rigid bodies to the solution of ordinary differential equations. The computation of radio circuits, the calculation of satellite trajectories, the study of the stability of an aircraft in flight, and the explanation of the course of chemical reactions are all carried out by studying and solving ordinary differential equations. The most interesting and most important applications of these equations are in the theory of oscillations (cf. [[Oscillations, theory of|Oscillations, theory of]]) and in [[Automatic control theory|automatic control theory]]. Applied problems in turn produce new formulations of problems in the theory of ordinary differential equations; the mathematical theory of optimal control (cf. [[Optimal control, mathematical theory of|Optimal control, mathematical theory of]]) in fact arose in this manner.
  
In what follows the independent variable is denoted by $  t $, the unknown functions by $  x , y , z $, etc., while the derivatives of these functions with respect to $  t $ will be denoted by $  \dot{x} , \ddot{x} , \dots, x ^ {( n) } $, etc.
  
The simplest ordinary differential equation is already encountered in analysis: The problem of finding the primitive function of a given continuous function $  f ( t) $ amounts to finding an unknown function $  x ( t) $ which satisfies the equation

$$ \tag{1 }
\dot{x}  = f ( t) .
$$
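
For illustration (an example added here), every solution of (1) is recovered from $  f $ by integration:

$$
x ( t)  = x ( t _ {0} ) + \int\limits _ { t _ {0} } ^ { t }  f ( s)  d s ;
$$

thus, for $  f ( t) = \cos  t $, all solutions have the form $  x ( t) = \sin  t + C $ with an arbitrary constant $  C $.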
  
 
In order to prove that this equation is solvable, a special apparatus had to be developed — the theory of the [[Riemann integral|Riemann integral]].
 
A natural generalization of equation (1) is an ordinary differential equation of the first order, solved with respect to the derivative:

$$ \tag{2 }
\dot{x} ( t)  = f ( t , x ) ,
$$
  
where $  f ( t , x ) $ is a known function, defined in a certain region $  D $ of the $  ( t , x ) $-plane. Many practical problems can be reduced to the solution (or, as is often said, the integration) of this equation. A solution of the ordinary differential equation (2) is a function $  x ( t) $ defined and differentiable on some interval $  I $ and satisfying the conditions

$$
( t , x ( t) )  \in  D ,\  t \in I ,
$$

$$
\dot{x} ( t)  = f ( t , x ( t) ) ,\  t \in I .
$$
  
The solution of (2) may be geometrically represented in the $  ( t , x ) $-plane as a curve with equation $  x = x ( t) $, $  t \in I $. This curve is known as an [[Integral curve|integral curve]]; it has a tangent at every point and is totally contained in $  D $. The geometrical interpretation of equation (2) itself is as a field of directions in $  D $, obtained by drawing a segment $  l _ {t , x }  $ of small length with angular coefficient $  f ( t , x ) $ through each point $  ( t , x ) \in D $. Any integral curve $  x = x ( t) $ is tangent at each of its points to the segment $  l _ {t , x ( t) }  $.
  
The existence theorem answers the question of the existence of a solution of equation (2): If $  f ( t , x ) \in C( D) $ (i.e. is continuous in $  D $), then at least one continuously-differentiable integral curve of equation (2) passes through any point $  ( t _ {0} , x _ {0} ) \in D $, and each such curve may be extended in both directions up to the boundary of any closed subregion lying completely in $  D $ and containing the point $  ( t _ {0} , x _ {0} ) $. In other words, for any point $  ( t _ {0} , x _ {0} ) \in D $ it is possible to find at least one non-extendable solution $  x = x ( t) $, $  t \in I $, such that $  x ( t) \in C  ^ {1} ( I) $ (i.e. $  x $ is continuous in $  I $ together with its derivative $  \dot{x} $),

$$ \tag{3 }
x ( t _ {0} )  = x _ {0} ,
$$

and $  x ( t) $ tends to the boundary of $  D $ as $  t $ tends to the right or left end of the interval $  I $.
  
A very important theoretical problem is to clarify the assumptions to be made concerning the right-hand side of an ordinary differential equation and the additional conditions to be imposed on the equation in order that it have a unique solution. The following existence and uniqueness theorem is valid: If $  f ( t , x ) \in C( D) $ satisfies a [[Lipschitz condition|Lipschitz condition]] with respect to $  x $ in $  D $ and if $  ( t _ {0} , x _ {0} ) \in D $, then equation (2) has a unique non-extendable solution satisfying condition (3). In particular, if two solutions $  x _ {1} ( t) $, $  t \in I _ {1} $, and $  x _ {2} ( t) $, $  t \in I _ {2} $, of such an equation (2) coincide for at least one value $  t = t _ {0} $, i.e. $  x _ {1} ( t _ {0} ) = x _ {2} ( t _ {0} ) $, then

$$
x _ {1} ( t)  = x _ {2} ( t) ,\  t \in I _ {1} \cap I _ {2} .
$$
  
The geometrical content of this theorem is that the entire region $  D $ is covered by integral curves of equation (2), with no intersections between any two curves. Unique solutions may also be obtained under weaker assumptions regarding the function $  f ( t , x ) $ [[#References|[6]]].
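
Continuity of $  f $ alone does not, however, guarantee uniqueness. As a simple illustration, for the equation

$$
\dot{x}  = 3 x  ^ {2/3}
$$

the right-hand side is continuous but does not satisfy a Lipschitz condition near $  x = 0 $, and both $  x ( t) \equiv 0 $ and $  x ( t) = t  ^ {3} $, $  t \geq 0 $, are solutions with $  x ( 0) = 0 $, so that two different integral curves pass through the point $  ( 0 , 0 ) $.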
  
The relation (3) is known as an initial condition. The numbers $  t _ {0} $ and $  x _ {0} $ are called initial values for the solution of equation (2), while the point $  ( t _ {0} , x _ {0} ) $ is called the initial point of the corresponding integral curve. The task of finding the solution of this equation satisfying the initial condition (3) (or, in other words, with initial values $  t _ {0} $, $  x _ {0} $) is known as the [[Cauchy problem|Cauchy problem]] or the initial value problem. The theorem just given provides sufficient conditions for the unique solvability of the Cauchy problem (2), (3).
  
Applied problems often involve systems of ordinary differential equations, containing several unknown functions of the same variable and their derivatives. A natural generalization of equation (2) is the normal form of a system of differential equations of order $  n $:

$$ \tag{4 }
{\dot{x} } {}  ^ {i}  = f ^ { i } ( t , x  ^ {1}, \dots, x  ^ {n} ) ,\  i = 1, \dots, n ,
$$
  
where $  x  ^ {1}, \dots, x  ^ {n} $ are unknown functions of the variable $  t $ and $  f ^ { i } $, $  i = 1, \dots, n $, are given functions in $  n + 1 $ variables. Writing

$$
\mathbf x  = ( x  ^ {1}, \dots, x  ^ {n} ) ,
$$

$$
\mathbf f ( t , \mathbf x )  = ( f ^ { 1 } ( t , \mathbf x ), \dots, f ^ { n } ( t , \mathbf x ) ) ,
$$
  
 
the system (4) takes the vector form:

$$ \tag{5 }
\dot{\mathbf x}  = \mathbf f ( t , \mathbf x ) .
$$
  
 
The vector function

$$ \tag{6 }
\mathbf x  = \mathbf x ( t)  = ( x  ^ {1} ( t), \dots, x  ^ {n} ( t) ) ,\  t \in I ,
$$

is a solution of the system (4) or of the vector equation (5). Each solution can be represented in the $  ( n + 1 ) $-dimensional space $  t , x  ^ {1}, \dots, x  ^ {n} $ as an integral curve — the graph of the vector function (6).
  
 
The Cauchy problem for equation (5) is to find the solution satisfying the initial conditions

$$
x  ^ {1} ( t _ {0} ) = x _ {0}  ^ {1}, \dots, x  ^ {n} ( t _ {0} ) = x _ {0}  ^ {n} ,
$$

or

$$ \tag{7 }
\mathbf x ( t _ {0} )  = \mathbf x _ {0} .
$$
  
 
The solution of the Cauchy problem (5), (7) is conveniently written as

$$ \tag{8 }
\mathbf x  = \mathbf x ( t , t _ {0} , \mathbf x _ {0} ) ,\  t \in I .
$$
  
 
The existence and uniqueness theorem for equation (5) is formulated as for equation (2).
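
In practice the Cauchy problem (5), (7) is usually solved numerically. The following minimal sketch (an added illustration, assuming the SciPy library is available; the particular right-hand side, a damped pendulum, and the tolerances are example choices) approximates the solution $  \mathbf x ( t , t _ {0} , \mathbf x _ {0} ) $ on a given interval:

<pre>
# Minimal illustrative sketch: numerical solution of the Cauchy problem (5), (7).
# The right-hand side f(t, x) below (a damped pendulum) is only an example.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    # x = (x1, x2) = (angle, angular velocity); example coefficients
    return [x[1], -0.1 * x[1] - np.sin(x[0])]

t0, t1 = 0.0, 20.0        # interval of integration
x0 = [1.0, 0.0]           # initial values (7)

sol = solve_ivp(f, (t0, t1), x0, rtol=1e-8, atol=1e-10)
print(sol.t[-1], sol.y[:, -1])   # approximate value of x(t1, t0, x0)
</pre>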
  
Very general systems of ordinary differential equations (solved with respect to the leading derivatives of all unknown functions) are reducible to normal systems. An important special class of systems (5) are linear systems of $  n $ (coupled) ordinary differential equations of the first order:

$$
\dot{\mathbf x}  = A ( t) \mathbf x + \mathbf F ( t) ,
$$

where $  A ( t) $ is an $  ( n \times n ) $-matrix.
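
For example, if $  A $ is a constant matrix and $  \mathbf F ( t) \equiv 0 $, the solution with initial condition $  \mathbf x ( t _ {0} ) = \mathbf x _ {0} $ can be written by means of the matrix exponential:

$$
\mathbf x ( t)  = e ^ {( t - t _ {0} ) A } \mathbf x _ {0} .
$$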
  
 
Of major importance in applications and in the theory of ordinary differential equations are autonomous systems of ordinary differential equations (cf. [[Autonomous system|Autonomous system]]):

$$ \tag{9 }
\dot{\mathbf x}  = \mathbf f ( \mathbf x ) ,
$$
  
i.e. normal systems whose right-hand side does not depend explicitly on the variable $  t $. In such a case equation (6) is conveniently regarded as a parametric representation of a curve, the solution being regarded as a phase trajectory in the $  n $-dimensional phase space of the variables $  x  ^ {1}, \dots, x  ^ {n} $. If $  \mathbf x = \mathbf x ( t) $ is a solution of the system (9), then the function $  \mathbf x = \mathbf x ( t + c ) $, where $  c $ is an arbitrary constant, also satisfies (9).
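
Indeed, by the chain rule,

$$
\frac{d }{d t }  \mathbf x ( t + c )  = \dot{\mathbf x} ( t + c )  = \mathbf f ( \mathbf x ( t + c ) ) ,
$$

so the shifted function again satisfies (9); its phase trajectory is the same curve, traversed with a shifted time origin.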
  
Another generalization of equation (2) is an ordinary differential equation of order $  n $, solved with respect to its leading derivative:

$$ \tag{10 }
y ^ {( n) }  = f ( t , y , \dot{y}, \dots, y ^ {( n - 1 ) } ) .
$$
  
 
An important special class of such equations are linear ordinary differential equations:

$$
y ^ {( n) } + a _ {1} ( t) y ^ {( n - 1 ) } + \dots + a _ {n - 1 }  ( t) \dot{y} + a _ {n} ( t) y  = F ( t) .
$$
  
Equation (10) is reduced to a system of $  n $ first-order equations if one introduces new unknown functions of the variable $  t $ by the formulas

$$
x  ^ {1} = y , x  ^ {2} = \dot{y}, \dots, x  ^ {n} = y ^ {( n - 1 ) } .
$$
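
For instance, the equation of the harmonic oscillator, $  \ddot{y} + \omega  ^ {2} y = 0 $, becomes after this substitution the normal system

$$
\dot{x} {}  ^ {1}  = x  ^ {2} ,\ \
\dot{x} {}  ^ {2}  = - \omega  ^ {2} x  ^ {1} .
$$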
  
If, for example, equation (10) describes the dynamics of a certain object and the motion of this object is to be studied starting from a definite moment $  t = t _ {0} $ corresponding to a definite initial state, the following additional conditions must be imposed on equation (10):

$$ \tag{11 }
y ( t _ {0} ) = y _ {0} , \dot{y} ( t _ {0} ) = {\dot{y} } _ {0}, \dots, y ^ {( n - 1 ) } ( t _ {0} ) = y _ {0} ^ {( n - 1 ) } .
$$
  
The task of finding an $  n $ times differentiable function $  y = y ( t) $, $  t \in I $, for which equation (10) becomes an identity for all $  t \in I $ and which satisfies the initial conditions (11) is known as the Cauchy problem.
  
 
The existence and uniqueness theorem: If

$$
f ( t , u _ {1}, \dots, u _ {n} )  \in  C ( D) ,
$$

if it satisfies a Lipschitz condition with respect to $  u _ {1}, \dots, u _ {n} $ and if

$$
( t _ {0} , y _ {0} , {\dot{y} } _ {0}, \dots, {y _ {0} } ^ {( n - 1 ) } )  \in  D ,
$$

then the Cauchy problem (10), (11) has a unique solution.
  
The Cauchy problem does not exhaust the problems that have been studied for equations (10) of higher order (or for systems (5)). Specific physical and technological problems often involve not initial conditions but supplementary conditions of a different kind (so-called boundary conditions), in which the values of the unknown function $  y ( t) $ and of its derivatives (or relations between them) are prescribed for several different values of the independent variable. For instance, in the [[Brachistochrone|brachistochrone]] problem, the equation

$$
2 y \ddot{y} + {\dot{y} } {}  ^ {2} + 1  = 0
$$
  
is to be integrated under the boundary conditions $  y ( a) = A $, $  y ( b) = B $. Finding a $  2 \pi $-periodic solution of the [[Duffing equation|Duffing equation]] reduces to finding the solution which satisfies the periodicity conditions $  y ( 0) = y ( 2 \pi ) $, $  \dot{y} ( 0) = \dot{y} ( 2 \pi ) $; in the study of laminar flow around a plate one encounters the problem:

$$
\dddot{y} + y \ddot{y}  = 0 ,\ \
y ( 0)  = \dot{y} ( 0)  = 0 ,
$$

$$
\dot{y} ( t)  \rightarrow  2 \  \textrm{ as }  t \rightarrow \infty .
$$
  
 
The problem of finding a solution of an ordinary differential equation, or of a system of ordinary differential equations, satisfying conditions other than the initial conditions (11) is known as a boundary value problem (cf. [[Boundary value problem, ordinary differential equations|Boundary value problem, ordinary differential equations]]). The theoretical analysis of the existence and uniqueness of a solution of a boundary value problem is of importance for the practical problem involved, since it establishes the mutual compatibility of the assumptions made in the mathematical description of the problem and the relative completeness of this description. One important boundary value problem is the [[Sturm–Liouville problem|Sturm–Liouville problem]]. Boundary value problems for linear equations and systems are closely connected with problems involving eigen values and eigen functions (cf. [[Eigen function|Eigen function]]; [[Eigen value|Eigen value]]) and also with the [[Spectral analysis|spectral analysis]] of ordinary differential operators.
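
Boundary value problems of this kind are also treated numerically. The following minimal sketch (an added illustration, assuming the SciPy library is available; the model problem $  \ddot{y} + y = 0 $, $  y ( 0) = 0 $, $  y ( \pi / 2 ) = 1 $, with exact solution $  y = \sin  t $, is chosen only as an example) shows the typical workflow:

<pre>
# Minimal illustrative sketch: a two-point boundary value problem,
# y'' + y = 0,  y(0) = 0,  y(pi/2) = 1,  whose exact solution is y = sin t.
import numpy as np
from scipy.integrate import solve_bvp

def rhs(t, y):
    # first-order system: y[0] = y, y[1] = y'
    return np.vstack((y[1], -y[0]))

def bc(ya, yb):
    # residuals of the boundary conditions y(0) = 0 and y(pi/2) = 1
    return np.array([ya[0], yb[0] - 1.0])

t = np.linspace(0.0, np.pi / 2, 11)   # initial mesh
y = np.zeros((2, t.size))             # initial guess for y and y'

sol = solve_bvp(rhs, bc, t, y)
print(sol.sol(np.pi / 4)[0], np.sin(np.pi / 4))   # compare with the exact value
</pre>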
  
The principal task of the theory of ordinary differential equations is the study of the solutions of such equations. However, what such a study should mean has been understood in various ways at different times. The original aim was to integrate equations in quadratures, i.e. to obtain a closed formula yielding (in explicit, implicit or parametric form) an expression for the dependence of a specific solution on $  t $ in terms of elementary functions and their integrals. Such formulas, if found, are of help in calculations and in the study of the properties of the solutions. Of special interest is the description of the totality of solutions of a given equation. Under very general assumptions, equation (5) corresponds to a family of vector functions depending on $  n $ arbitrary independent parameters. If the equation of this family has the form

$$
\mathbf x  = \pmb\phi ( t , c _ {1}, \dots, c _ {n} ) ,
$$
  
the function $  \pmb\phi $ is said to be the general solution of equation (5).
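
For example, the general solution of the scalar equation $  \dot{x} = a x $ is $  x = c _ {1} e ^ {a t } $, while the general solution of the equation $  \ddot{y} + y = 0 $ (written as a normal system of order two) is

$$
y  = c _ {1} \cos  t + c _ {2} \sin  t ,
$$

with two arbitrary independent parameters $  c _ {1} , c _ {2} $.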
  
 
However, the first examples of ordinary differential equations which are not integrable in quadratures appeared in the mid-19th century. It was found that solutions in closed form can be obtained for only a few classes of equations (see, for example, [[Bernoulli equation|Bernoulli equation]]; [[Differential equation with total differential|Differential equation with total differential]]; [[Linear ordinary differential equation with constant coefficients|Linear ordinary differential equation with constant coefficients]]). A detailed study was then begun of the most important and most frequently encountered equations which cannot be solved in quadratures (e.g. the [[Bessel equation|Bessel equation]]); special notation was introduced for their solutions, their properties were studied and their values were tabulated. Many [[Special functions|special functions]] appeared in this way.
 
All this formed the subject matter of the [[Qualitative theory of differential equations|qualitative theory of differential equations]], established in the late 19th century and still in full development.
  
Of decisive importance is the question whether or not the Cauchy problem is well-posed for an ordinary differential equation. Since in concrete problems the initial values can never be specified exactly, it is important to find conditions under which small changes in the initial values entail only small changes in the solution. The theorem on continuous dependence of the solutions on the initial values holds: Let (8) be the solution of equation (5), where $  \mathbf f ( t , \mathbf x ) \in C ( D) $ satisfies a Lipschitz condition with respect to $  \mathbf x $; then, for any $  \epsilon > 0 $ and any compact interval $  J \subset  I $, $  t _ {0} \in J $, it is possible to find a $  \delta > 0 $ such that the solution $  \mathbf x ( t , t _ {0} , \mathbf x _ {0}  ^ {*} ) $ of this equation, where $  | \mathbf x _ {0}  ^ {*} - \mathbf x _ {0} | < \delta $, is defined on $  J $ and, for all $  t \in J $,

$$ \tag{12 }
| \mathbf x ( t , t _ {0} , \mathbf x _ {0}  ^ {*} ) -
\mathbf x ( t , t _ {0} , \mathbf x _ {0} ) |  < \epsilon .
$$
  
 
In other words, if the independent variable is restricted to a compact interval and the variations in the initial values are sufficiently small, then the solution varies only slightly on the whole interval chosen. This result may also be generalized to obtain conditions which ensure the [[Differentiability of solutions (of differential equations)|differentiability of solutions (of differential equations)]] with respect to the initial values.
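
A simple example shows why the restriction to a compact interval is essential: for the equation $  \dot{x} = x $ one has

$$
| x ( t , t _ {0} , x _ {0}  ^ {*} ) - x ( t , t _ {0} , x _ {0} ) |  = e ^ {t - t _ {0} } | x _ {0}  ^ {*} - x _ {0} | ,
$$

which is arbitrarily small on any compact interval for $  | x _ {0}  ^ {*} - x _ {0} | $ sufficiently small, but grows without bound as $  t \rightarrow \infty $.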
  
However, this theorem does not give a complete answer to the question of interest in practical applications, since it speaks only about a compact interval of variation of the independent variable. It is often necessary (e.g. in the theory of controlled motion) to deal with the solution of the Cauchy problem (5), (7) defined for all $  t \geq  t _ {0} $, i.e. to clarify the stability of the solution with respect to small changes in the initial values on the entire infinite interval $  t \geq  t _ {0} $, that is, to obtain conditions which ensure the validity of inequality (12) for all $  t \geq  t _ {0} $. Studies of the stability of equilibrium positions or of stationary regimes of a concrete system reduce to this very problem. A solution which varies only slightly on the infinite interval $  [ t _ {0} , \infty ) $ whenever the deviations in the initial values are sufficiently small is said to be Lyapunov stable (cf. [[Lyapunov stability|Lyapunov stability]]).
  
In selecting an ordinary differential equation to describe a real process, some features must always be neglected and others idealized. This means that a description of a process by ordinary differential equations is only approximate. For instance, the study of the operation of a valve oscillator leads to the [[Van der Pol equation|van der Pol equation]] only if certain assumptions are made which do not fully correspond to the real state of affairs. Furthermore, the course of the process is often affected by perturbing factors which are practically impossible to allow for in setting up the equations; all that is known is that their effect is "small". It is therefore important to clarify how the solution varies under small variations of the system of equations itself, i.e. on passing from equation (5) to the perturbed equation

$$
\dot{\mathbf x}  = \mathbf f ( t , \mathbf x ) + \mathbf R ( t , \mathbf x ) ,
$$
  
which allows for small correction terms. It was found that, on a compact interval of variation of the independent variable (under the same assumptions as in the theorem on continuous dependence of the solutions on the initial values), the solution varies only slightly provided the perturbation $  \mathbf R ( t , \mathbf x ) $ is sufficiently small. If this property is retained on the infinite interval $  t \geq  t _ {0} $, the solution is said to be stable under constantly acting perturbations.
  
 
Studies of Lyapunov stability, stability under constantly acting perturbations and their modifications form the subject of a highly important branch of the qualitative theory — [[Stability theory|stability theory]]. Of foremost interest in practice are systems of ordinary differential equations whose solutions change little for all small variations of these equations; such systems are known as robust systems (cf. [[Rough system|Rough system]]).
 
Any real object is characterized by different parameters, which often enter into the right-hand side of the system of ordinary differential equations describing the behaviour of the object,

$$ \tag{13 }
\dot{\mathbf x}  = \mathbf f ( t , \mathbf x , \pmb\epsilon ) ,
$$
  
in the form of certain quantities <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910134.png" />. The values of these parameters are never known with perfect accuracy, so that it is important to clarify the conditions ensuring the stability of the solutions of equation (13) to small perturbations of the parameter <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910135.png" />. If the independent variable varies in a given compact interval, then — under certain natural assumptions regarding the right-hand side of equation (13) — the solutions will show a continuous (and even differentiable) dependence on the parameters.
+
in the form of certain quantities $  ( \epsilon  ^ {1}, \dots, \epsilon  ^ {k} ) = \pmb\epsilon $.  
 +
The values of these parameters are never known with perfect accuracy, so that it is important to clarify the conditions ensuring the stability of the solutions of equation (13) to small perturbations of the parameter $  \pmb\epsilon $.  
 +
If the independent variable varies in a given compact interval, then — under certain natural assumptions regarding the right-hand side of equation (13) — the solutions will show a continuous (and even differentiable) dependence on the parameters.
  
 
The clarification of the dependence of the solutions on the parameter is directly related to the question of the quality of the idealization leading to the mathematical model of the behaviour of the object — the system of ordinary differential equations. A typical example of idealization is the neglect of a small parameter. If, with allowance for this small parameter, the system (13) is obtained, then, owing to the fact that the variation of the solutions with the parameter is continuous, it is perfectly permissible to neglect this parameter in the study of the behaviour of the object on a compact interval of time. Thus, as a first approximation, one is considering the simpler system
 
The clarification of the dependence of the solutions on the parameter is directly related to the question of the quality of the idealization leading to the mathematical model of the behaviour of the object — the system of ordinary differential equations. A typical example of idealization is the neglect of a small parameter. If, with allowance for this small parameter, the system (13) is obtained, then, owing to the fact that the variation of the solutions with the parameter is continuous, it is perfectly permissible to neglect this parameter in the study of the behaviour of the object on a compact interval of time. Thus, as a first approximation, one is considering the simpler system
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910136.png" /></td> </tr></table>
+
$$
 +
\dot{\mathbf x}  = \mathbf f ( t , \mathbf x , 0 ) .
 +
$$
  
 
This result is the principle of the extensively employed method of small parameters (cf. [[Small parameter, method of the|Small parameter, method of the]]); the [[Krylov–Bogolyubov method of averaging|Krylov–Bogolyubov method of averaging]] and other asymptotic methods for solving ordinary differential equations. However, the study of a number of phenomena yields a system of [[Differential equations with small parameter|differential equations with small parameter]] in front of the derivative:
 
This result is the principle of the extensively employed method of small parameters (cf. [[Small parameter, method of the|Small parameter, method of the]]); the [[Krylov–Bogolyubov method of averaging|Krylov–Bogolyubov method of averaging]] and other asymptotic methods for solving ordinary differential equations. However, the study of a number of phenomena yields a system of [[Differential equations with small parameter|differential equations with small parameter]] in front of the derivative:
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910137.png" /></td> </tr></table>
+
$$
 +
\pmb\epsilon \dot{x}  = f ( t , x , y ) ,\ \
 +
\dot{y}  = g ( t , x , y ) .
 +
$$
  
Here it is in general no longer permissible to assume that <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910138.png" />, even if it is attempted to construct a rough representation of the phenomenon on a compact interval of time.
+
Here it is in general no longer permissible to assume that $  \pmb\epsilon = 0 $,  
 +
even if it is attempted to construct a rough representation of the phenomenon on a compact interval of time.
  
The theory of ordinary differential equations considers certain fruitful important generalizations of the problems outlined above. First, one may extend the class of functions within which the solution of the Cauchy problem (2), (3) is sought: Determine the solution in the class of absolutely-continuous functions and prove the existence of such solutions. Of special practical interest is to find the solution of equation (2) if the function <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910139.png" /> is discontinuous or many-valued with respect to <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910140.png" />. The most general problem in this respect is the problem of solving a [[Differential inclusion|differential inclusion]].
+
The theory of ordinary differential equations considers certain fruitful important generalizations of the problems outlined above. First, one may extend the class of functions within which the solution of the Cauchy problem (2), (3) is sought: Determine the solution in the class of absolutely-continuous functions and prove the existence of such solutions. Of special practical interest is to find the solution of equation (2) if the function $  f ( t , x ) $
 +
is discontinuous or many-valued with respect to $  x $.  
 +
The most general problem in this respect is the problem of solving a [[Differential inclusion|differential inclusion]].
  
Also under consideration are ordinary differential equations of order <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910141.png" /> more general than (10), which are unsolved with respect to the leading derivative
+
Also under consideration are ordinary differential equations of order $  n $
 +
more general than (10), which are unsolved with respect to the leading derivative
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910142.png" /></td> </tr></table>
+
$$
 +
F ( t , y , \dot{y}, \dots, y ^ {( n ) } )  = 0 .
 +
$$
  
 
Studies of this equation are closely connected with the theory of implicit functions.
 
Studies of this equation are closely connected with the theory of implicit functions.
  
Equation (2) connects the derivative of the solution at a point <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910143.png" /> with the value of the solution at this point: <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910144.png" />, but certain applied problems (e.g. those in which allowance must be made for a delaying effect of the executing mechanism) yield retarded ordinary differential equations (cf. [[Differential equations, ordinary, retarded|Differential equations, ordinary, retarded]]):
+
Equation (2) connects the derivative of the solution at a point $  t $
 +
with the value of the solution at this point: $  \dot{x} ( t) = f ( t , x ( t) ) $,  
 +
but certain applied problems (e.g. those in which allowance must be made for a delaying effect of the executing mechanism) yield retarded ordinary differential equations (cf. [[Differential equations, ordinary, retarded|Differential equations, ordinary, retarded]]):
 +
 
 +
$$
 +
\dot{x}  =  f ( t , x ( t - \tau ) ) ,
 +
$$
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910145.png" /></td> </tr></table>
+
in which the derivative of the solution at a point  $  t $
 +
is connected with the value of the solution at a point  $  t - \tau $.  
 +
A special section of the theory of ordinary differential equations deals with such equations, and also with the more general ordinary differential equations with distributed arguments (cf. [[Differential equations, ordinary, with distributed arguments|Differential equations, ordinary, with distributed arguments]]).
  
in which the derivative of the solution at a point <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910146.png" /> is connected with the value of the solution at a point <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910147.png" />. A special section of the theory of ordinary differential equations deals with such equations, and also with the more general ordinary differential equations with distributed arguments (cf. [[Differential equations, ordinary, with distributed arguments|Differential equations, ordinary, with distributed arguments]]).
+
The study of the phase space of the autonomous system (9) leads to yet another generalization of ordinary differential equations. Denote by  $  \mathbf x = \mathbf x ( t , \mathbf x _ {0} ) $
 +
the trajectory of this system passing through the point $  \mathbf x _ {0} $.  
 +
If the point  $  \mathbf x _ {0} $
 +
is mapped to the point  $  \mathbf x ( t , \mathbf x _ {0} ) $,
 +
one obtains a transformation of the phase space depending on the parameter  $  t $
 +
which determines the motion in this space. The properties of such motions are studied in the theory of dynamical systems. They may be studied not only in Euclidean space but also on manifolds; an example are [[Differential equations on a torus|differential equations on a torus]].
  
The study of the phase space of the autonomous system (9) leads to yet another generalization of ordinary differential equations. Denote by <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910148.png" /> the trajectory of this system passing through the point <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910149.png" />. If the point <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910150.png" /> is mapped to the point <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910151.png" />, one obtains a transformation of the phase space depending on the parameter <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910152.png" /> which determines the motion in this space. The properties of such motions are studied in the theory of dynamical systems. They may be studied not only in Euclidean space but also on manifolds; an example are [[Differential equations on a torus|differential equations on a torus]].
+
Above ordinary differential equations in the field of real numbers have been considered (e.g. finding a real-valued function  $  x ( t) $
 +
of a real variable  $  t $
 +
satisfying equation (2)). However, certain properties of such equations are more conveniently studied with the aid of complex numbers. A natural further generalization is the study of ordinary differential equations in the field of complex numbers. Thus, one may consider the equation
  
Above ordinary differential equations in the field of real numbers have been considered (e.g. finding a real-valued function <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910153.png" /> of a real variable <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910154.png" /> satisfying equation (2)). However, certain properties of such equations are more conveniently studied with the aid of complex numbers. A natural further generalization is the study of ordinary differential equations in the field of complex numbers. Thus, one may consider the equation
+
$$
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910155.png" /></td> </tr></table>
+
\frac{dw}{dz}
 +
  = f ( z , w ) ,
 +
$$
  
where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910156.png" /> is an analytic function of its variables, and pose the problem of finding an analytic function <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910157.png" /> in the complex variable <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910158.png" /> which would satisfy this equation. The study of such equations, equations of higher orders and systems forms the subject of the [[Analytic theory of differential equations|analytic theory of differential equations]]; in particular, it contains results of importance to mathematical physics, concerning linear ordinary differential equations of the second order (cf. [[Linear ordinary differential equation of the second order|Linear ordinary differential equation of the second order]]).
+
where $  f ( z , w ) $
 +
is an analytic function of its variables, and pose the problem of finding an analytic function $  w ( z) $
 +
in the complex variable $  z $
 +
which would satisfy this equation. The study of such equations, equations of higher orders and systems forms the subject of the [[Analytic theory of differential equations|analytic theory of differential equations]]; in particular, it contains results of importance to mathematical physics, concerning linear ordinary differential equations of the second order (cf. [[Linear ordinary differential equation of the second order|Linear ordinary differential equation of the second order]]).
  
 
One may also consider the equation
 
One may also consider the equation
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910159.png" /></td> <td valign="top" style="width:5%;text-align:right;">(14)</td></tr></table>
+
$$ \tag{14 }
  
on the assumption that <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910160.png" /> belongs to an infinite-dimensional [[Banach space|Banach space]] <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910161.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910162.png" /> is a real or complex independent variable and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910163.png" /> is an operator mapping the product <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910164.png" /> into <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910165.png" />. Equation (14) may serve in processing, for example, systems of ordinary differential equations of infinite order (cf. [[Differential equations, infinite-order system of|Differential equations, infinite-order system of]]). Equations of the type (14) are studied in the theory of abstract differential equations (cf. [[Differential equation, abstract|Differential equation, abstract]]), which is the meeting point of ordinary differential equations and [[Functional analysis|functional analysis]]. Of major interest are linear differential equations of the form
+
\frac{d \mathbf x }{dt}
 +
  = \mathbf f ( t , \mathbf x )
 +
$$
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910166.png" /></td> </tr></table>
+
on the assumption that  $  \mathbf x $
 +
belongs to an infinite-dimensional [[Banach space|Banach space]]  $  B $,
 +
$  t $
 +
is a real or complex independent variable and  $  \mathbf f ( t , \mathbf x ) $
 +
is an operator mapping the product  $  ( - \infty , + \infty ) \times B $
 +
into  $  B $.
 +
Equation (14) may serve in processing, for example, systems of ordinary differential equations of infinite order (cf. [[Differential equations, infinite-order system of|Differential equations, infinite-order system of]]). Equations of the type (14) are studied in the theory of abstract differential equations (cf. [[Differential equation, abstract|Differential equation, abstract]]), which is the meeting point of ordinary differential equations and [[Functional analysis|functional analysis]]. Of major interest are linear differential equations of the form
 +
 
 +
$$
 +
 
 +
\frac{d \mathbf x }{dt}
 +
  =  A ( t) \mathbf x + \mathbf F ( t)
 +
$$
  
 
with bounded or unbounded operators; certain classes of partial differential equations (cf. [[Differential equation, partial|Differential equation, partial]]) can be written in the form of such an equation.
 
with bounded or unbounded operators; certain classes of partial differential equations (cf. [[Differential equation, partial|Differential equation, partial]]) can be written in the form of such an equation.
  
 
====References====
 
====References====
<table><TR><TD valign="top">[1]</TD> <TD valign="top"> E. Kamke,   "Differentialgleichungen: Lösungen und Lösungsmethoden" , '''1. Gewöhnliche Differentialgleichungen''' , Chelsea, reprint (1947)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top"> E.A. Coddington,   N. Levinson,   "Theory of ordinary differential equations" , McGraw-Hill (1955)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top"> S. Lefschetz,   "Differential equations: geometric theory" , Interscience (1957)</TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top"> I.G. Petrovskii,   "Ordinary differential equations" , Prentice-Hall (1966) (Translated from Russian)</TD></TR><TR><TD valign="top">[5]</TD> <TD valign="top"> L.S. Pontryagin,   "Ordinary differential equations" , Addison-Wesley (1962) (Translated from Russian)</TD></TR><TR><TD valign="top">[6]</TD> <TD valign="top"> G. Sansone,   "Ordinary differential equations" , '''1–2''' , Zanichelli (1948–1949) (In Italian)</TD></TR><TR><TD valign="top">[7]</TD> <TD valign="top"> P. Hartman,   "Ordinary differential equations" , Birkhäuser (1982)</TD></TR></table>
+
<table><TR><TD valign="top">[1]</TD> <TD valign="top"> E. Kamke, "Differentialgleichungen: Lösungen und Lösungsmethoden" , '''1. Gewöhnliche Differentialgleichungen''' , Chelsea, reprint (1947)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top"> E.A. Coddington, N. Levinson, "Theory of ordinary differential equations" , McGraw-Hill (1955) {{MR|0069338}} {{ZBL|0064.33002}} </TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top"> S. Lefschetz, "Differential equations: geometric theory" , Interscience (1957) {{MR|0094488}} {{ZBL|0080.06401}} </TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top"> I.G. Petrovskii, "Ordinary differential equations" , Prentice-Hall (1966) (Translated from Russian) {{MR|0193298}} {{ZBL|}} </TD></TR><TR><TD valign="top">[5]</TD> <TD valign="top"> L.S. Pontryagin, "Ordinary differential equations" , Addison-Wesley (1962) (Translated from Russian) {{MR|0140742}} {{ZBL|0112.05502}} </TD></TR><TR><TD valign="top">[6]</TD> <TD valign="top"> G. Sansone, "Ordinary differential equations" , '''1–2''' , Zanichelli (1948–1949) (In Italian) {{MR|0159075}} {{MR|0183915}} {{MR|0064221}} {{ZBL|0429.34003}} {{ZBL|0125.05102}} {{ZBL|0108.08703}} </TD></TR><TR><TD valign="top">[7]</TD> <TD valign="top"> P. Hartman, "Ordinary differential equations" , Birkhäuser (1982) {{MR|0658490}} {{ZBL|0476.34002}} </TD></TR></table>
 
 
 
 
  
 
====Comments====
 
====Comments====
The collection of all trajectories <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910167.png" /> is often referred to as the phase portrait of a differential equation <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910168.png" />. In connection with stability under various kinds of perturbations, persistence of certain features of the phase portrait, such as persistence of equilibria and persistence of closed orbits, is often of importance: cf. e.g. [[#References|[a4]]], Chapt. 16, for a number of results. Particularly nice robust systems are the structurally stable ones. A structurally stable differential equation <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910169.png" /> on <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910170.png" /> is one such that if <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910171.png" /> is a sufficiently nearby equation, i.e. if <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910172.png" /> is near <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910173.png" /> is some suitable sense, then there is a homeomorphism of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910174.png" /> into itself (that is, a one-to-one onto continuous mapping <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910175.png" /> whose inverse is also continuous) which takes the phase portrait of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910176.png" /> into the phase portrait of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/d/d031/d031910/d031910177.png" />. Cf. [[#References|[a4]]] and [[Rough system|Rough system]] for some more details.
+
The collection of all trajectories $  \{ \mathbf x ( t) \} \subset  D $
 +
is often referred to as the phase portrait of a differential equation $  \dot{x} = f ( x) $.  
 +
In connection with stability under various kinds of perturbations, persistence of certain features of the phase portrait, such as persistence of equilibria and persistence of closed orbits, is often of importance: cf. e.g. [[#References|[a4]]], Chapt. 16, for a number of results. Particularly nice robust systems are the structurally stable ones. A structurally stable differential equation $  \dot{x} = f ( x) $
 +
on $  D $
 +
is one such that if $  \dot{x} = g ( x) $
 +
is a sufficiently nearby equation, i.e. if $  f $
 +
is near $  g $
 +
is some suitable sense, then there is a homeomorphism of $  D $
 +
into itself (that is, a one-to-one onto continuous mapping $  D \rightarrow D $
 +
whose inverse is also continuous) which takes the phase portrait of $  \dot{x} = g ( x) $
 +
into the phase portrait of $  \dot{x} = f ( x) $.  
 +
Cf. [[#References|[a4]]] and [[Rough system|Rough system]] for some more details.
  
 
Cf. [[#References|[a5]]] for a comprehensive account of differential equations with discontinuous right-hand side.
 
Cf. [[#References|[a5]]] for a comprehensive account of differential equations with discontinuous right-hand side.
  
 
====References====
 
====References====
<table><TR><TD valign="top">[a1]</TD> <TD valign="top"> V.I. Arnol'd,   "Geometrical methods in the theory of ordinary differential equations" , Springer (1983) (Translated from Russian)</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top"> V.I. Arnol'd,   "Ordinary differential equations" , M.I.T. (1973) (Translated from Russian)</TD></TR><TR><TD valign="top">[a3]</TD> <TD valign="top"> J.K. Hale,   "Ordinary differential equations" , Wiley (1969)</TD></TR><TR><TD valign="top">[a4]</TD> <TD valign="top"> M.W. Hirsch,   S. Smale,   "Differential equations, dynamical systems, and linear algebra" , Acad. Press (1974)</TD></TR><TR><TD valign="top">[a5]</TD> <TD valign="top"> A.F. Filippov,   "Differential equations with discontinuous right-hand sides" , Kluwer (1988) (Translated from Russian)</TD></TR></table>
+
<table><TR><TD valign="top">[a1]</TD> <TD valign="top"> V.I. Arnol'd, "Geometrical methods in the theory of ordinary differential equations" , Springer (1983) (Translated from Russian)</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top"> V.I. Arnol'd, "Ordinary differential equations" , M.I.T. (1973) (Translated from Russian) {{MR|}} {{ZBL|1049.34001}} {{ZBL|0744.34001}} {{ZBL|0659.58012}} {{ZBL|0602.58020}} {{ZBL|0577.34001}} {{ZBL|0956.34502}} {{ZBL|0956.34501}} {{ZBL|0956.34503}} {{ZBL|0237.34008}} {{ZBL|0135.42601}} </TD></TR><TR><TD valign="top">[a3]</TD> <TD valign="top"> J.K. Hale, "Ordinary differential equations" , Wiley (1969) {{MR|0419901}} {{ZBL|0186.40901}} </TD></TR><TR><TD valign="top">[a4]</TD> <TD valign="top"> M.W. Hirsch, S. Smale, "Differential equations, dynamical systems, and linear algebra" , Acad. Press (1974) {{MR|0486784}} {{ZBL|0309.34001}} </TD></TR><TR><TD valign="top">[a5]</TD> <TD valign="top"> A.F. Filippov, "Differential equations with discontinuous right-hand sides" , Kluwer (1988) (Translated from Russian) {{MR|2118433}} {{MR|1850551}} {{MR|1354280}} {{MR|1334843}} {{MR|0790682}} {{MR|0114016}} {{ZBL|0664.34001}} </TD></TR></table>

Latest revision as of 01:50, 23 January 2022


An equation with a function in one independent variable as unknown, containing not only the unknown function itself, but also its derivatives of various orders.

The term "differential equations" was proposed in 1676 by G. Leibniz. The first studies of these equations were carried out in the late 17th century in the context of certain problems in mechanics and geometry.

Ordinary differential equations have important applications and are a powerful tool in the study of many problems in the natural sciences and in technology; they are extensively employed in mechanics, astronomy, physics, and in many problems of chemistry and biology. The reason for this is the fact that objective laws governing certain phenomena (processes) can be written as ordinary differential equations, so that the equations themselves are a quantitative expression of these laws. For instance, Newton's laws of mechanics make it possible to reduce the description of the motion of mass points or solid bodies to solving ordinary differential equations. The computation of radiotechnical circuits or satellite trajectories, studies of the stability of a plane in flight, and explaining the course of chemical reactions are all carried out by studying and solving ordinary differential equations. The most interesting and most important applications of these equations are in the theory of oscillations (cf. Oscillations, theory of) and in automatic control theory. Applied problems in turn produce new formulations of problems in the theory of ordinary differential equations; the mathematical theory of optimal control (cf. Optimal control, mathematical theory of) in fact arose in this manner.

In what follows the independent variable is denoted by $ t $, the unknown functions by $ x , y , z $, etc., while the derivatives of these functions with respect to $ t $ will be denoted by $ \dot{x} , \ddot{x} , \dots, x ^ {( n) } $, etc.

The simplest ordinary differential equation is already encountered in analysis: The problem of finding the primitive function of a given continuous function $ f ( t) $ amounts to finding an unknown function $ x ( t) $ which satisfies the equation

$$ \tag{1 } \dot{x} = f ( t) . $$

In order to prove that this equation is solvable, a special apparatus had to be developed — the theory of the Riemann integral.
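Numerically, the primitive function can be approximated by cumulative quadrature. The following sketch (in Python, assuming NumPy is available; the concrete right-hand side is only an illustrative choice) integrates equation (1) with a given initial value by the trapezoidal rule.

```python
import numpy as np

# Solve x'(t) = f(t), x(0) = x0, by cumulative trapezoidal quadrature.
# The concrete f is illustrative only.
f = lambda t: np.cos(t)          # example right-hand side
t = np.linspace(0.0, 2 * np.pi, 201)
x0 = 1.0

dt = np.diff(t)
increments = 0.5 * (f(t[:-1]) + f(t[1:])) * dt   # trapezoidal rule on each step
x = x0 + np.concatenate(([0.0], np.cumsum(increments)))

print(np.max(np.abs(x - (x0 + np.sin(t)))))      # error vs. the exact primitive
```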

A natural generalization of equation (1) is an ordinary differential equation of the first order, solved with respect to the derivative:

$$ \tag{2 } \dot{x} ( t) = f ( t , x ) , $$

where $ f ( t , x ) $ is a known function, defined in a certain region of the $ ( t , x ) $-plane. Many practical problems can be reduced to the solution (or, as is often said, the integration) of this equation. A solution of the ordinary differential equation (2) is a function $ x ( t) $ defined and differentiable on some interval $ I $ and satisfying the conditions

$$ ( t , x ( t) ) \in D ,\ t \in I , $$

$$ \dot{x} ( t) = f ( t , x ( t) ) ,\ t \in I . $$

The solution of (2) may be represented geometrically in the $ ( t , x ) $-plane as a curve with equation $ x = x ( t) $, $ t \in I $. This curve, known as an integral curve, has a tangent at every point and is entirely contained in $ D $. Equation (2) itself is interpreted geometrically as a field of directions in $ D $, obtained by drawing through each point $ ( t , x ) \in D $ a segment $ l _ {t , x } $ of small length with angular coefficient $ f ( t , x ) $. Any integral curve $ x = x ( t) $ is tangent at each of its points to the segment $ l _ {t , x ( t) } $.
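The field of directions can be tabulated numerically. The sketch below (Python with NumPy; the right-hand side is an illustrative choice, not taken from the article) computes the angular coefficients $ f ( t , x ) $ on a grid, which is exactly the data one would draw as the small segments $ l _ {t , x } $.

```python
import numpy as np

# Tabulate the direction field of x' = f(t, x) on a rectangular grid in D.
# The right-hand side is an illustrative choice.
def f(t, x):
    return x - t ** 2            # example f(t, x)

t_grid = np.linspace(-2.0, 2.0, 9)
x_grid = np.linspace(-2.0, 2.0, 9)
T, X = np.meshgrid(t_grid, x_grid)
slopes = f(T, X)                 # angular coefficient of the segment l_{t,x}

# Each entry slopes[i, j] is the slope an integral curve must have
# when it passes through the point (T[i, j], X[i, j]).
print(slopes.round(2))
```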

The existence theorem answers the question of the existence of a solution of equation (2): If $ f ( t , x ) \in C( D) $ (i.e. is continuous in $ D $), then at least one continuously-differentiable integral curve of equation (2) passes through any point $ ( t _ {0} , x _ {0} ) \in D $, and each such curve may be extended in both directions up to the boundary of any closed subregion lying completely in $ D $ and containing the point $ ( t _ {0} , x _ {0} ) $. In other words, for any point $ ( t _ {0} , x _ {0} ) \in D $ it is possible to find at least one non-extendable solution $ x = x ( t) $, $ t \in I $, such that $ x ( t) \in C ^ {1} ( I) $ (i.e. $ x $ is continuous in $ I $ together with its derivative $ \dot{x} $),

$$ \tag{3 } x ( t _ {0} ) = x _ {0} , $$

and $ x ( t) $ tends to the boundary of $ D $ as $ t $ tends to the right or left end of the interval $ I $.

A very important theoretical problem is to clarify the assumptions to be made concerning the right-hand side of an ordinary differential equation and the additional conditions to be imposed on the equation in order that it has a unique solution. The following existence and uniqueness theorem is valid: If $ f ( t , x ) \in C( D) $ satisfies a Lipschitz condition with respect to $ x $ in $ D $ and if $ ( t _ {0} , x _ {0} ) \in D $, then equation (2) has a unique, non-extendable solution satisfying condition (3). In particular, if two solutions $ x _ {1} ( t) $, $ t \in I _ {1} $, and $ x _ {2} ( t) $, $ t \in I _ {2} $, of such an equation (2) coincide for at least one value $ t = t _ {0} $, i.e. $ x _ {1} ( t _ {0} ) = x _ {2} ( t _ {0} ) $, then

$$ x _ {1} ( t) = x _ {2} ( t) ,\ t \in I _ {1} \cap I _ {2} . $$

The geometrical content of this theorem is that the entire region $ D $ is covered by integral curves of equation (2), with no intersections between any two curves. Unique solutions may also be obtained under weaker assumptions regarding the function $ f ( t , x ) $ [6].
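If the Lipschitz condition is dropped, uniqueness may indeed fail. For example, for the Cauchy problem

$$ \dot{x} = 3 x ^ {2/3} ,\ x ( 0) = 0 , $$

the right-hand side is continuous but does not satisfy a Lipschitz condition with respect to $ x $ near $ x = 0 $, and both $ x ( t) \equiv 0 $ and $ x ( t) = t ^ {3} $ are solutions, so the Cauchy problem has more than one solution.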

The relation (3) is known as an initial condition. The numbers $ t _ {0} $ and $ x _ {0} $ are called initial values for the solution of equation (2), while the point $ ( t _ {0} , x _ {0} ) $ is called the initial point corresponding to the integral curve. The task of finding the solution of this equation satisfying initial condition (3) (or, in other words, with initial values $ t _ {0} $, $ x _ {0} $) is known as the Cauchy problem or the initial value problem. The theorem just given provides sufficient conditions for the unique solvability of the Cauchy problem (2), (3).

Applied problems often involve systems of ordinary differential equations, containing several unknown functions of the same variable and their derivatives. A natural generalization of equation (2) is the normal form of a system of differential equations of order $ n $:

$$ \tag{4 } {\dot{x} } {} ^ {i} = f ^ { i } ( t , x ^ {1}, \dots, x ^ {n} ) ,\ i = 1, \dots, n , $$

where $ x ^ {1}, \dots, x ^ {n} $ are unknown functions of the variable $ t $ and $ f ^ { i } $, $ i = 1, \dots, n $, are given functions in $ n + 1 $ variables. Writing

$$ \mathbf x = ( x ^ {1}, \dots, x ^ {n} ) , $$

$$ \mathbf f ( t , \mathbf x ) = ( f ^ { 1 } ( t , \mathbf x ), \dots, f ^ { n } ( t , \mathbf x ) ) , $$

the system (4) takes the vector form:

$$ \tag{5 } \dot{\mathbf x} = \mathbf f ( t , \mathbf x ) . $$

The vector function

$$ \tag{6 } \mathbf x = \mathbf x ( t) = ( x ^ {1} ( t), \dots, x ^ {n} ( t) ) , \ t \in I , $$

is a solution of the system (4) or of the vector equation (5). Each solution can be represented in the $ ( n + 1 ) $-dimensional space $ t , x ^ {1}, \dots, x ^ {n} $ as an integral curve — the graph of the vector function (6).

The Cauchy problem for equation (5) is to find the solution satisfying the initial conditions

$$ x ^ {1} ( t _ {0} ) = x _ {0} ^ {1}, \dots, x ^ {n} ( t _ {0} ) = x _ {0} ^ {n} , $$

or

$$ \tag{7 } \mathbf x ( t _ {0} ) = \mathbf x _ {0} . $$

The solution of the Cauchy problem (5), (7) is conveniently written as

$$ \tag{8 } \mathbf x = \mathbf x ( t , t _ {0} , \mathbf x _ {0} ) ,\ t \in I . $$

The existence and uniqueness theorem for equation (5) is formulated as for equation (2).
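A minimal computational sketch of the Cauchy problem (5), (7) is given below (Python, assuming NumPy and SciPy are available). The right-hand side, a harmonic oscillator written in the normal form (4), the initial point and the tolerances are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Cauchy problem x' = f(t, x), x(t0) = x0 for a system with n = 2.
# f describes a harmonic oscillator written in the normal form (4).
def f(t, x):
    return np.array([x[1], -x[0]])

t0, x0 = 0.0, np.array([1.0, 0.0])
sol = solve_ivp(f, (t0, 10.0), x0, rtol=1e-8, atol=1e-10, dense_output=True)

# sol.sol(t) plays the role of the solution x(t, t0, x0) in (8).
print(sol.sol(np.pi))   # approximately (-1, 0), since the exact solution is (cos t, -sin t)
```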

Very general systems of ordinary differential equations (solved with respect to the leading derivatives of all unknown functions) are reducible to normal systems. An important special class of systems (5) are linear systems of $ n $ (coupled) ordinary differential equations of the first order:

$$ \dot{\mathbf x} = A ( t) \mathbf x + \mathbf F ( t) , $$

where $ A ( t) $ is an $ ( n \times n) $-matrix.

Of major importance in applications and in the theory of ordinary differential equations are autonomous systems of ordinary differential equations (cf. Autonomous system):

$$ \tag{9 } \dot{\mathbf x} = \mathbf f ( \mathbf x ) , $$

i.e. normal systems whose right-hand side does not explicitly depend on the variable $ t $. In this case it is convenient to regard (6) as the parametric representation of a curve and to view the solution as a phase trajectory in the $ n $-dimensional phase space $ x ^ {1}, \dots, x ^ {n} $. If $ \mathbf x = \mathbf x ( t) $ is a solution of the system (9), then the function $ \mathbf x = \mathbf x ( t + c ) $, where $ c $ is an arbitrary constant, also satisfies (9).

Another generalization of equation (2) is an ordinary differential equation of order $ n $, solved with respect to its leading derivative:

$$ \tag{10 } y ^ {( n) } = f ( t , y , \dot{y}, \dots, y ^ {( n - 1 ) } ) . $$

An important special class of such equations are linear ordinary differential equations:

$$ y ^ {( n) } + a _ {1} ( t) y ^ {( n - 1 ) } + \dots + a _ {n - 1 } ( t) \dot{y} + a _ {n} ( t) y = F ( t) . $$

Equation (10) is reduced to a system of $ n $ first-order equations if one introduces new unknown functions of the variable $ t $ by the formulas

$$ x ^ {1} = y , x ^ {2} = \dot{y}, \dots, x ^ {n} = y ^ {( n - 1 ) } . $$
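The reduction is purely mechanical, and it is also how higher-order equations are usually handled numerically. A sketch (Python with SciPy; the concrete second-order equation, a damped oscillator, is an illustrative choice):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reduce the second-order equation y'' = f(t, y, y') to a first-order system
# via x1 = y, x2 = y'.  The damped oscillator below is an illustrative choice.
def f(t, y, ydot):
    return -0.1 * ydot - y

def rhs(t, x):
    x1, x2 = x          # x1 = y, x2 = y'
    return [x2, f(t, x1, x2)]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], rtol=1e-8)
print(sol.y[0, -1])     # value of y at the final time
```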

If, for example, equation (10) describes the dynamics of a certain object and the motion of this object is to be studied starting from a definite moment $ t = t _ {0} $ corresponding to a definite initial state, the following additional conditions must be imposed on equation (10):

$$ \tag{11 } y ( t _ {0} ) = y _ {0} , \dot{y} ( t _ {0} ) = {\dot{y} } _ {0}, \dots, y ^ {( n - 1 ) } ( t _ {0} ) = y _ {0} ^ {( n - 1 ) } . $$

The task of finding an $ n $ times differentiable function $ y = y ( t) $, $ t \in I $, for which equation (10) becomes an identity for all $ t \in I $ and which satisfies the initial conditions (11) is known as the Cauchy problem.

The existence and uniqueness theorem: If

$$ f ( t , u _ {1}, \dots, u _ {n} ) \in C ( D) , $$

if it satisfies a Lipschitz condition with respect to $ u _ {1}, \dots, u _ {n} $ and if

$$ ( t _ {0} , y _ {0} , {\dot{y} } _ {0}, \dots, {y _ {0} } ^ {( n - 1 ) } ) \in D , $$

then the Cauchy problem (10), (11) has a unique solution.

The Cauchy problem does not account for all problems which have been studied for equations (10) of higher orders (or systems (5)). Specific physical and technological problems often do not involve initial conditions but rather supplementary conditions of different kinds (so-called boundary conditions), when the values of the function $ y ( t) $ being sought and its derivatives (or relations between these derivatives) are given for certain different values of the independent variable. For instance, in the brachistochrone problem, the equation

$$ 2 y \ddot{y} + {\dot{y} } {} ^ {2} + 1 = 0 $$

is to be integrated under the boundary conditions $ y ( a) = A $, $ y ( b) = B $. Finding a $ 2 \pi $-periodic solution of the Duffing equation reduces to finding the solution which satisfies the periodicity conditions $ y ( 0) = y ( 2 \pi ) $, $ \dot{y} ( 0) = \dot{y} ( 2 \pi ) $; in the study of laminar flow around a plate one encounters the problem:

$$ \dddot{y} + y \ddot{y} = 0 ,\ \ y ( 0) = \dot{y} ( 0) = 0 , $$

$$ \dot{y} ( t) \rightarrow 2 \ \textrm{ as } t \rightarrow \infty . $$

A problem of finding a solution satisfying conditions different from the initial conditions (11) for ordinary differential equations or for a system of ordinary differential equations is known as a boundary value problem (cf. Boundary value problem, ordinary differential equations). The theoretical analysis of the existence and uniqueness of a solution of a boundary value problem is of importance to the practical problem involved, since it proves the mutual compatibility of the assumptions made in the mathematical description of the problem and the relative completeness of this description. One important boundary value problem is the Sturm–Liouville problem. Boundary value problems for linear equations and systems are closely connected with problems involving eigen values and eigen functions (cf. Eigen function; Eigen value) and also with the spectral analysis of ordinary differential operators.
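One elementary numerical approach to a two-point boundary value problem is the shooting method: the missing initial slope is treated as an unknown and adjusted until the boundary condition at the far end is met. A sketch under illustrative assumptions (Python with SciPy; the linear equation $ \ddot{y} = - y $ with $ y ( 0) = 0 $, $ y ( 1) = 1 $ is chosen only because its exact solution is known):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Shooting method for the two-point boundary value problem
#   y'' = -y,  y(0) = 0,  y(1) = 1   (an illustrative linear example).
# The unknown initial slope s = y'(0) is adjusted so that y(1; s) = 1.
def y_at_b(s, b=1.0):
    rhs = lambda t, x: [x[1], -x[0]]
    sol = solve_ivp(rhs, (0.0, b), [0.0, s], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

s_star = brentq(lambda s: y_at_b(s) - 1.0, 0.1, 10.0)
print(s_star, 1.0 / np.sin(1.0))   # the exact slope is 1/sin(1)
```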

The principal task of the theory of ordinary differential equations is the study of solutions of such equations. However, the meaning of such a study of solutions of ordinary differential equations has been understood in various ways at different times. The original trend was to carry out the integration of equations in quadratures, i.e. to obtain a closed formula yielding (in explicit, implicit or parametric form) an expression for the dependence of a specific solution on $ t $ in terms of elementary functions and their integrals. Such formulas, if found, are of help in calculations and in the study of the properties of the solutions. Of special interest is the description of the totality of solutions of a given equation. Under very general assumptions, equation (5) corresponds to a family of vector functions depending on $ n $ arbitrary independent parameters. If the equation of this family has the form

$$ \mathbf x = \pmb\phi ( t , c _ {1}, \dots, c _ {n} ) , $$

the function $ \pmb\phi $ is said to be the general solution of equation (5).

However, the first examples of ordinary differential equations which are not integrable in quadratures appeared in the mid-19th century. It was found that closed-form solutions can be obtained for only a few classes of equations (see, for example, Bernoulli equation; Differential equation with total differential; Linear ordinary differential equation with constant coefficients). A detailed study was then begun of the most important and frequently encountered equations which cannot be solved in quadratures (e.g. the Bessel equation): special notation was introduced for such equations, their properties were studied and their values were tabulated. Many special functions appeared in this way.
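For the classes that are integrable in quadratures, computer algebra systems can produce the general solution together with its arbitrary constants. A sketch assuming SymPy is available (the linear constant-coefficient equation below is an illustrative choice):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# A linear equation with constant coefficients, one of the classes
# integrable in closed form: y'' + 3 y' + 2 y = 0.
ode = sp.Eq(y(t).diff(t, 2) + 3 * y(t).diff(t) + 2 * y(t), 0)
general_solution = sp.dsolve(ode, y(t))

# The general solution is built from exp(-t) and exp(-2*t)
# and contains two arbitrary constants C1, C2.
print(general_solution)
```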

Because of practical demands, methods of approximate integration of ordinary differential equations were also developed, such as the method of sequential approximation (cf. Sequential approximation, method of), the Adams method, etc. Various methods for graphical and mechanical integration of these equations were proposed. Mathematical analysis offers a rich selection of numerical methods for solving many problems in ordinary differential equations (cf. Differential equations, ordinary, approximate methods of solution of). These methods are convenient computational algorithms with effective estimates of accuracy, and modern computational techniques make it possible to obtain a numerical solution to each such problem in an economical and rapid manner.
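The simplest of these numerical methods is the explicit Euler scheme $ x _ {k+1} = x _ {k} + h f ( t _ {k} , x _ {k} ) $; practical codes use higher-order one-step or multi-step (Adams-type) schemes with error control, but the sketch below (Python with NumPy; the test equation and step size are illustrative) shows the basic idea.

```python
import numpy as np

def euler(f, t0, x0, t_end, h):
    """Explicit Euler method for x' = f(t, x): x_{k+1} = x_k + h f(t_k, x_k)."""
    ts, xs = [t0], [x0]
    t, x = t0, x0
    while t < t_end - 1e-12:
        x = x + h * f(t, x)
        t = t + h
        ts.append(t)
        xs.append(x)
    return np.array(ts), np.array(xs)

# Illustrative test problem x' = -x, x(0) = 1, whose exact solution is exp(-t).
ts, xs = euler(lambda t, x: -x, 0.0, 1.0, 2.0, 0.01)
print(abs(xs[-1] - np.exp(-2.0)))   # global error of order h
```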

However, the application of numerical methods to a specific equation yields only a finite number of particular solutions on a finite segment of variation of the independent variable. They cannot yield information about the asymptotic behaviour of the solutions, and cannot tell whether a given equation has a periodic or an oscillating solution. In many practical problems it is important to establish the nature of the solution on an infinite interval of variation of the independent variable, and to obtain a complete picture of the integral curves. For this reason, the main trend in the theory of ordinary differential equations shifted to the study of the general features of the behaviour of solutions, and to the development of methods for studying the global properties of solutions from the differential equation itself, without recourse to its integration.

All this formed the subject matter of the qualitative theory of differential equations, established in the late 19th century and still in full development.

Of decisive importance is the clarification as to whether or not the Cauchy problem is a well-posed problem for an ordinary differential equation. Since in concrete problems the initial values can never be perfectly exact, it is important to find the conditions under which small changes in initial values entail only small changes in the results. The theorem on continuous dependence of the solutions on initial values is valid: Let (8) be the solution of equation (5), where $ \mathbf f ( t , \mathbf x ) \in C ( D) $ and let it satisfy a Lipschitz condition with respect to $ \mathbf x $; then, for any $ \epsilon > 0 $ and any compact $ J \subset I $, $ t _ {0} \in J $, it is possible to find a $ \delta > 0 $ such that the solution $ \mathbf x ( t , t _ {0} , \mathbf x _ {0} ^ {*} ) $ of this equation, where $ | \mathbf x _ {0} ^ {*} - \mathbf x _ {0} | < \delta $, is defined on $ J $ and for all $ t \in J $,

$$ \tag{12 } | \mathbf x ( t , t _ {0} , \mathbf x _ {0} ^ {*} ) - \mathbf x ( t , t _ {0} , \mathbf x _ {0} ) | < \epsilon . $$

In other words, if the independent variable is restricted to a compact interval and the variations in the initial values are sufficiently small, then the solution varies only slightly on the entire interval chosen. This result may also be generalized to obtain conditions ensuring the differentiability of the solutions with respect to the initial values.
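The theorem can be illustrated numerically: for two nearby initial values the corresponding solutions remain close on a fixed compact interval. The sketch below (Python with SciPy; the equation, the interval and the size of the perturbation are illustrative choices) computes the maximal deviation of the kind estimated in (12).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two solutions of x' = sin(t) * x with nearby initial values, compared on [0, 5].
rhs = lambda t, x: np.sin(t) * x
t_eval = np.linspace(0.0, 5.0, 101)

x_a = solve_ivp(rhs, (0.0, 5.0), [1.0],        t_eval=t_eval, rtol=1e-10).y[0]
x_b = solve_ivp(rhs, (0.0, 5.0), [1.0 + 1e-6], t_eval=t_eval, rtol=1e-10).y[0]

# On the compact interval the difference stays of the same order as the
# initial perturbation (amplified by a factor that is bounded on [0, 5]).
print(np.max(np.abs(x_a - x_b)))
```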

However, this theorem fails to give a complete answer to the problem which is of interest in practical applications, since it only speaks about a compact segment of variation of the independent variable. Now it is often necessary (e.g. in the theory of controlled motion) to deal with the solution of the Cauchy problem (5), (7) defined for all $ t \geq t _ {0} $, i.e. to clarify the stability of the solution with respect to small changes in the initial values on the entire infinite interval $ t \geq t _ {0} $, i.e. to obtain conditions which would ensure the validity of inequality (12) for all $ t \geq t _ {0} $. Studies of the stability of equilibrium positions or of the stationary conditions of a concrete system are reduced to this very problem. A solution which varies only to a small extent on the infinite interval $ [ t _ {0} , \infty ) $ if the deviations from the initial values are sufficiently small is said to be Lyapunov stable (cf. Lyapunov stability).

In selecting an ordinary differential equation to describe a real process, some features must always be neglected and others idealized. This means that a description of a process by ordinary differential equations is only approximate. For instance, the study of the operation of a valve oscillator leads to the van der Pol equation if certain assumptions, which do not fully correspond to the real state of things, are made. Furthermore, the course of the process is often affected by perturbing factors which are practically impossible to allow for in setting up equations; all that is known is that their effect is "small". It is therefore important to clarify how the solution varies as a result of small variations in the system of equations itself, i.e. on passing from equation (5) to the perturbed equation

$$ \dot{\mathbf x} = \mathbf f ( t , \mathbf x ) + \mathbf R ( t , \mathbf x ) , $$

which allows for small correction terms. It was found that, when the independent variable varies over a compact interval (and under the same assumptions as in the theorem on continuous dependence of the solutions on the initial values), the solution varies only slightly provided the perturbation $ \mathbf R ( t , \mathbf x ) $ is sufficiently small. If this property is retained on the infinite interval $ t \geq t _ {0} $, the solution is said to be stable under constantly acting perturbations.

Studies of Lyapunov stability, stability under constantly acting perturbations and their modifications form the subject of a highly important branch of the qualitative theory — stability theory. Of foremost interest in practice are systems of ordinary differential equations whose solutions change little for all small variations of these equations; such systems are known as robust systems (cf. Rough system).

Another important task in the qualitative theory is to obtain a pattern of the behaviour of the family of solutions throughout the domain of definition of the equation. In the case of the autonomous system (9) the problem is the construction of a phase picture, i.e. a qualitative overall description of the totality of phase trajectories in the phase space. Such a geometric picture gives a complete representation of the nature of all motions which may take place in the system under study. It is therefore important, first of all, to clarify the behaviour of the trajectories in a neighbourhood of equilibrium positions, and to find separatrices (cf. Separatrix) and limit cycles (cf. Limit cycle). An especially urgent task is to find stable limit cycles, since these correspond to auto-oscillations in real systems (cf. Auto-oscillation).
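As an illustration, the van der Pol equation mentioned above can be written as an autonomous system and integrated numerically from two different initial points; both trajectories wind onto the same closed curve, the stable limit cycle responsible for the auto-oscillation. The sketch assumes SciPy is available; the parameter value and the initial points are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Van der Pol oscillator x'' - mu (1 - x^2) x' + x = 0 as an autonomous system.
mu = 1.0
def vdp(t, z):
    x, y = z
    return [y, mu * (1.0 - x ** 2) * y - x]

t_eval = np.linspace(0.0, 60.0, 3001)
inner = solve_ivp(vdp, (0.0, 60.0), [0.1, 0.0], t_eval=t_eval, rtol=1e-9).y
outer = solve_ivp(vdp, (0.0, 60.0), [4.0, 0.0], t_eval=t_eval, rtol=1e-9).y

# After a transient both trajectories have (almost) the same amplitude:
# they wind onto the stable limit cycle responsible for the auto-oscillation.
print(np.max(np.abs(inner[0, -500:])), np.max(np.abs(outer[0, -500:])))
```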

Any real object is characterized by different parameters, which often enter into the right-hand side of the system of ordinary differential equations describing the behaviour of the object,

$$ \tag{13 } \dot{\mathbf x} = \mathbf f ( t , \mathbf x , \pmb\epsilon ) , $$

in the form of certain quantities $ ( \epsilon ^ {1}, \dots, \epsilon ^ {k} ) = \pmb\epsilon $. The values of these parameters are never known with perfect accuracy, so that it is important to clarify the conditions ensuring the stability of the solutions of equation (13) to small perturbations of the parameter $ \pmb\epsilon $. If the independent variable varies in a given compact interval, then — under certain natural assumptions regarding the right-hand side of equation (13) — the solutions will show a continuous (and even differentiable) dependence on the parameters.

The clarification of the dependence of the solutions on the parameter is directly related to the question of the quality of the idealization leading to the mathematical model of the behaviour of the object, i.e. the system of ordinary differential equations. A typical example of idealization is the neglect of a small parameter. If taking this small parameter into account leads to the system (13), then, since the solutions depend continuously on the parameter, it is permissible to neglect it when studying the behaviour of the object on a compact interval of time. Thus, as a first approximation, one considers the simpler system

$$ \dot{\mathbf x} = \mathbf f ( t , \mathbf x , 0 ) . $$

This result underlies the extensively employed method of small parameters (cf. Small parameter, method of the), the Krylov–Bogolyubov method of averaging and other asymptotic methods for solving ordinary differential equations. However, the study of a number of phenomena yields a system of differential equations with a small parameter in front of the derivative:

$$ \epsilon \dot{x} = f ( t , x , y ) ,\ \ \dot{y} = g ( t , x , y ) . $$

Here it is in general no longer permissible to set $ \epsilon = 0 $, even if one only attempts to construct a rough representation of the phenomenon on a compact interval of time.
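The scalar sketch below (Python with SciPy; the equation, the value of $ \epsilon $ and the initial value are illustrative) shows the typical behaviour: for small $ \epsilon $ the solution has a fast initial transient (a boundary layer) which the reduced equation obtained by setting $ \epsilon = 0 $ cannot reproduce, since the latter leaves no room for an initial condition. A stiff (implicit) method is used because the problem is stiff for small $ \epsilon $.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Singularly perturbed problem  eps * x' = -x + cos(t),  x(0) = 2,
# solved with a stiff (implicit) method.  Setting eps = 0 would force
# x(t) = cos(t), so the initial condition x(0) = 2 could not be imposed:
# the reduced equation misses the fast initial transient (boundary layer).
eps = 1e-3
rhs = lambda t, x: (-x + np.cos(t)) / eps

t_eval = np.linspace(0.0, 0.02, 201)
sol = solve_ivp(rhs, (0.0, 0.02), [2.0], method='Radau', t_eval=t_eval,
                rtol=1e-8, atol=1e-10)

print(sol.y[0, 0], sol.y[0, -1])   # 2.0 at t = 0, already close to cos(t) ~ 1 by t = 0.02
```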

The theory of ordinary differential equations considers certain important and fruitful generalizations of the problems outlined above. First, one may extend the class of functions within which the solution of the Cauchy problem (2), (3) is sought: one may determine the solution in the class of absolutely-continuous functions and prove the existence of such solutions. Of special practical interest is finding the solution of equation (2) when the function $ f ( t , x ) $ is discontinuous or many-valued with respect to $ x $. The most general problem in this respect is the problem of solving a differential inclusion.

Also under consideration are ordinary differential equations of order $ n $ more general than (10), which are not solved with respect to the leading derivative:

$$ F ( t , y , \dot{y}, \dots, y ^ {( n ) } ) = 0 . $$

Studies of this equation are closely connected with the theory of implicit functions.

Equation (2) connects the derivative of the solution at a point $ t $ with the value of the solution at this point: $ \dot{x} ( t) = f ( t , x ( t) ) $, but certain applied problems (e.g. those in which allowance must be made for a delaying effect of the executing mechanism) yield retarded ordinary differential equations (cf. Differential equations, ordinary, retarded):

$$ \dot{x} = f ( t , x ( t - \tau ) ) , $$

in which the derivative of the solution at a point $ t $ is connected with the value of the solution at a point $ t - \tau $. A special section of the theory of ordinary differential equations deals with such equations, and also with the more general ordinary differential equations with distributed arguments (cf. Differential equations, ordinary, with distributed arguments).
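Numerically, a retarded equation can be treated by the method of steps: the solution is advanced while the delayed values are read off from the already computed (or prescribed) history. A minimal fixed-step Euler sketch (Python with NumPy; the right-hand side, the constant history and the step size are illustrative choices):

```python
import numpy as np

# Fixed-step Euler scheme for the retarded equation x'(t) = f(t, x(t - tau)).
# The history (values of x on [-tau, 0]) and f are illustrative choices.
def f(t, x_delayed):
    return -x_delayed              # x'(t) = -x(t - tau)

tau, h, t_end = 1.0, 0.001, 5.0
n_delay = int(round(tau / h))
n_steps = int(round(t_end / h))

x = np.empty(n_steps + n_delay + 1)
x[: n_delay + 1] = 1.0             # constant history x(t) = 1 for t in [-tau, 0]

for k in range(n_steps):
    t_k = k * h
    # x[k] holds the delayed value x(t_k - tau); the history occupies
    # the first n_delay + 1 slots of the array.
    x[n_delay + k + 1] = x[n_delay + k] + h * f(t_k, x[k])

print(x[-1])                       # approximate value of x at t = t_end
```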

The study of the phase space of the autonomous system (9) leads to yet another generalization of ordinary differential equations. Denote by $ \mathbf x = \mathbf x ( t , \mathbf x _ {0} ) $ the trajectory of this system passing through the point $ \mathbf x _ {0} $. If the point $ \mathbf x _ {0} $ is mapped to the point $ \mathbf x ( t , \mathbf x _ {0} ) $, one obtains a transformation of the phase space depending on the parameter $ t $ which determines the motion in this space. The properties of such motions are studied in the theory of dynamical systems. They may be studied not only in Euclidean space but also on manifolds; an example is given by differential equations on a torus.

Above, ordinary differential equations over the field of real numbers have been considered (e.g. finding a real-valued function $ x ( t) $ of a real variable $ t $ satisfying equation (2)). However, certain properties of such equations are more conveniently studied with the aid of complex numbers. A natural further generalization is the study of ordinary differential equations over the field of complex numbers. Thus, one may consider the equation

$$ \frac{dw}{dz} = f ( z , w ) , $$

where $ f ( z , w ) $ is an analytic function of its variables, and pose the problem of finding an analytic function $ w ( z) $ in the complex variable $ z $ which would satisfy this equation. The study of such equations, equations of higher orders and systems forms the subject of the analytic theory of differential equations; in particular, it contains results of importance to mathematical physics, concerning linear ordinary differential equations of the second order (cf. Linear ordinary differential equation of the second order).

One may also consider the equation

$$ \tag{14 } \frac{d \mathbf x }{dt} = \mathbf f ( t , \mathbf x ) $$

on the assumption that $ \mathbf x $ belongs to an infinite-dimensional Banach space $ B $, $ t $ is a real or complex independent variable and $ \mathbf f ( t , \mathbf x ) $ is an operator mapping the product $ ( - \infty , + \infty ) \times B $ into $ B $. Equation (14) may be used to treat, for example, systems of ordinary differential equations of infinite order (cf. Differential equations, infinite-order system of). Equations of the type (14) are studied in the theory of abstract differential equations (cf. Differential equation, abstract), which is the meeting point of ordinary differential equations and functional analysis. Of major interest are linear differential equations of the form

$$ \frac{d \mathbf x }{dt} = A ( t) \mathbf x + \mathbf F ( t) $$

with bounded or unbounded operators; certain classes of partial differential equations (cf. Differential equation, partial) can be written in the form of such an equation.

References

[1] E. Kamke, "Differentialgleichungen: Lösungen und Lösungsmethoden" , 1. Gewöhnliche Differentialgleichungen , Chelsea, reprint (1947)
[2] E.A. Coddington, N. Levinson, "Theory of ordinary differential equations" , McGraw-Hill (1955) MR0069338 Zbl 0064.33002
[3] S. Lefschetz, "Differential equations: geometric theory" , Interscience (1957) MR0094488 Zbl 0080.06401
[4] I.G. Petrovskii, "Ordinary differential equations" , Prentice-Hall (1966) (Translated from Russian) MR0193298
[5] L.S. Pontryagin, "Ordinary differential equations" , Addison-Wesley (1962) (Translated from Russian) MR0140742 Zbl 0112.05502
[6] G. Sansone, "Ordinary differential equations" , 1–2 , Zanichelli (1948–1949) (In Italian) MR0159075 MR0183915 MR0064221 Zbl 0429.34003 Zbl 0125.05102 Zbl 0108.08703
[7] P. Hartman, "Ordinary differential equations" , Birkhäuser (1982) MR0658490 Zbl 0476.34002

Comments

The collection of all trajectories $ \{ \mathbf x ( t) \} \subset D $ is often referred to as the phase portrait of a differential equation $ \dot{x} = f ( x) $. In connection with stability under various kinds of perturbations, persistence of certain features of the phase portrait, such as persistence of equilibria and persistence of closed orbits, is often of importance: cf. e.g. [a4], Chapt. 16, for a number of results. Particularly nice robust systems are the structurally stable ones. A structurally stable differential equation $ \dot{x} = f ( x) $ on $ D $ is one such that if $ \dot{x} = g ( x) $ is a sufficiently nearby equation, i.e. if $ f $ is near $ g $ in some suitable sense, then there is a homeomorphism of $ D $ onto itself (that is, a one-to-one onto continuous mapping $ D \rightarrow D $ whose inverse is also continuous) which takes the phase portrait of $ \dot{x} = g ( x) $ into the phase portrait of $ \dot{x} = f ( x) $. Cf. [a4] and Rough system for some more details.

Cf. [a5] for a comprehensive account of differential equations with discontinuous right-hand side.

References

[a1] V.I. Arnol'd, "Geometrical methods in the theory of ordinary differential equations" , Springer (1983) (Translated from Russian)
[a2] V.I. Arnol'd, "Ordinary differential equations" , M.I.T. (1973) (Translated from Russian) Zbl 1049.34001 Zbl 0744.34001 Zbl 0659.58012 Zbl 0602.58020 Zbl 0577.34001 Zbl 0956.34502 Zbl 0956.34501 Zbl 0956.34503 Zbl 0237.34008 Zbl 0135.42601
[a3] J.K. Hale, "Ordinary differential equations" , Wiley (1969) MR0419901 Zbl 0186.40901
[a4] M.W. Hirsch, S. Smale, "Differential equations, dynamical systems, and linear algebra" , Acad. Press (1974) MR0486784 Zbl 0309.34001
[a5] A.F. Filippov, "Differential equations with discontinuous right-hand sides" , Kluwer (1988) (Translated from Russian) MR2118433 MR1850551 MR1354280 MR1334843 MR0790682 MR0114016 Zbl 0664.34001
How to Cite This Entry:
Differential equation, ordinary. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Differential_equation,_ordinary&oldid=13954
This article was adapted from an original article by E.F. Mishchenko (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article