Stability for a part of the variables

From Encyclopedia of Mathematics
[[Lyapunov stability|Lyapunov stability]] of the solution $ x = 0 $ relative not to all but only to certain variables $ x_{1} \dots x_{k} $, $ k < n $, of a system of ordinary differential equations

$$ \tag{1 }
\dot{x}_{s} = X_{s} ( t, x_{1} \dots x_{n} ), \ \ s = 1 \dots n.
$$
  
the conditions for the existence and uniqueness of the solution <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s086/s086960/s0869607.png" />; moreover,
+
Here  $  X _ {s} ( t, x) $
 +
are given real-valued continuous functions, satisfying in the domain
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s086/s086960/s0869608.png" /></td> </tr></table>
+
$$ \tag{2 }
 +
t  \geq  0,\ \
 +
\sum _ {i = 1 } ^ { k }
 +
x _ {i}  ^ {2}  \leq  \textrm{ const } ,\ \
 +
\sum _ {j = k + 1 } ^ { n }
 +
x _ {j}  ^ {2}  < \infty
 +
$$
  
and any solution is defined for all <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s086/s086960/s0869609.png" /> for which <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s086/s086960/s08696010.png" />.
+
the conditions for the existence and uniqueness of the solution $  x ( t;  t _ {0} , x _ {0} ) $;
 +
moreover,
  
Put <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s086/s086960/s08696011.png" /> for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s086/s086960/s08696012.png" />; <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s086/s086960/s08696013.png" /> for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s086/s086960/s08696014.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s086/s086960/s08696015.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s086/s086960/s08696016.png" />; let
+
$$
 +
X _ {s} ( t, 0)  \equiv  0,\  s = 1 \dots n,
 +
$$
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s086/s086960/s08696017.png" /></td> </tr></table>
+
and any solution is defined for all  $  t \geq  t _ {0} \geq  0 $
 +
for which  $  \sum _ {i = 1 }  ^ {k} x _ {i}  ^ {2} \leq  H $.
  
Put $ x_{i} = y_{i} $ for $ i = 1 \dots k $; $ x_{k + j} = z_{j} $ for $ j = 1 \dots m $, $ n = k + m $ and $ m \geq 0 $; let

$$
\| y \| = \left ( \sum_{i = 1}^{k} y_{i}^{2} \right )^{1/2} , \ \
\| z \| = \left ( \sum_{j = 1}^{m} z_{j}^{2} \right )^{1/2} ,
$$

$$
\| x \| = \left ( \sum_{s = 1}^{n} x_{s}^{2} \right )^{1/2} .
$$
  
The solution $ x = 0 $ of the system (1) is called: a) stable relative to $ x_{1} \dots x_{k} $, or $ y $-stable, if

$$
( \forall \epsilon > 0 ) ( \forall t_{0} \in I ) ( \exists \delta > 0 ) ( \forall x_{0} \in B_{\delta} ) ( \forall t \in J^{+} ) : \ \| y ( t; t_{0}, x_{0} ) \| < \epsilon ,
$$

i.e. for any given numbers $ \epsilon > 0 $ ($ \epsilon < H $) and $ t_{0} \geq 0 $ one can find a number $ \delta ( \epsilon , t_{0} ) > 0 $ such that for every perturbation $ x_{0} $ satisfying the condition $ \| x_{0} \| \leq \delta $ and for every $ t > t_{0} $ the solution $ y ( t; t_{0}, x_{0} ) $ satisfies the condition $ \| y \| < \epsilon $;
  
b) $ y $-unstable in the opposite case, i.e. if

$$
( \exists \epsilon > 0 ) ( \exists t_{0} \in I ) ( \forall \delta > 0 ) ( \exists x_{0} \in B_{\delta} ) ( \exists t \in J^{+} ) : \ \| y ( t; t_{0}, x_{0} ) \| \geq \epsilon ;
$$
  
c) $ y $-stable uniformly in $ t_{0} $ if in definition a) for every $ \epsilon > 0 $ the number $ \delta ( \epsilon ) $ may be chosen independently of $ t_{0} $;
  
d) asymptotically $ y $-stable if it is $ y $-stable and if for every $ t_{0} \geq 0 $ there exists a $ \delta_{1} ( t_{0} ) > 0 $ such that

$$
\lim_{t \rightarrow \infty} \| y ( t; t_{0}, x_{0} ) \| = 0 \ \ \textrm{ for } \ \| x_{0} \| \leq \delta_{1} .
$$
  
Here $ I = [ 0, \infty ) $, $ J^{+} $ is the maximal right interval on which $ x ( t; t_{0}, x_{0} ) $ is defined, and $ B_{\delta} = \{ x \in \mathbf R^{n} : \| x \| < \delta \} $; in case d), besides the conditions stated above it is assumed that all solutions of the system (1) exist on $ [ t_{0}, \infty ) $.
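A simple example, not contained in the original article but easily checked against definitions a)–d), illustrates the distinction between $ y $-stability and stability in all variables: take $ n = 2 $, $ k = 1 $, with

$$
\dot{y} = - y, \ \ \dot{z} = z,
$$

so that $ y ( t; t_{0}, x_{0} ) = y_{0} e^{- ( t - t_{0} )} $ and $ z ( t; t_{0}, x_{0} ) = z_{0} e^{t - t_{0}} $. Since $ | y ( t) | \leq | y_{0} | $ and $ | y ( t) | \rightarrow 0 $ as $ t \rightarrow \infty $, the solution $ x = 0 $ is asymptotically $ y $-stable, uniformly in $ t_{0} $; yet it is Lyapunov unstable with respect to all variables, since $ | z ( t) | \rightarrow \infty $ for every $ z_{0} \neq 0 $.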
  
The statement of the problem of stability for a part of the variables was given by A.M. Lyapunov [[#References|[1]]] as a generalization of the stability problem with respect to all variables ($ k = n $). For the solution of this problem it is particularly effective to apply the method of Lyapunov functions, suitably modified for the problem of $ y $-stability (cf. [[#References|[2]]] and [[Lyapunov function|Lyapunov function]]). At the basis of this method lie a number of theorems generalizing the classical theorems of Lyapunov.
  
Consider a real-valued function $ V ( t, x) \in C^{1} $ with $ V ( t, 0) = 0 $, together with its total time derivative by virtue of the system (1):

$$
\dot{V} = \frac{\partial V}{\partial t} + \sum_{s = 1}^{n} \frac{\partial V}{\partial x_{s}} X_{s} .
$$

A function $ V ( t, x) $ of constant sign is called $ y $-sign-definite if there exists a positive-definite function $ W ( y) $ such that in the region (2),

$$
V ( t, x) \geq W ( y) \ \ \textrm{ or } \ \ - V ( t, x) \geq W ( y).
$$

A bounded function $ V ( t, x) $ is said to admit an infinitesimal upper bound for $ x_{1} \dots x_{p} $ if for every $ l > 0 $ there exists a $ \lambda ( l) > 0 $ such that

$$
| V ( t, x) | < l
$$

for $ t \geq 0 $, $ \sum_{i = 1}^{p} x_{i}^{2} < \lambda $, $ - \infty < x_{p + 1} \dots x_{n} < \infty $.
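To see what $ y $-sign-definiteness requires, the following check (an illustration, not part of the original article) may help: for $ k = 1 $, $ n = 2 $, the function $ V = y^{2} $ is $ y $-positive-definite, since $ V \geq W ( y) = y^{2} $ throughout (2); by contrast,

$$
V ( t, x) = \frac{y^{2}}{1 + z^{2}}
$$

is positive whenever $ y \neq 0 $ but is not $ y $-positive-definite: for fixed $ y $ it tends to $ 0 $ as $ | z | \rightarrow \infty $, so no positive-definite $ W ( y) $ can bound it from below in the region (2).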
  
 
===Theorem 1.===
If the system (1) is such that there exists a $ y $-positive-definite function $ V ( t, x) $ with derivative $ \dot{V} \leq 0 $, then the solution $ x = 0 $ is $ y $-stable.
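A minimal sketch of Theorem 1 in action (an assumed example, not from the article): for $ k = m = 1 $ consider

$$
\dot{y} = - y z^{2} , \ \ \dot{z} = z, \ \ V = y^{2} .
$$

Here $ V $ is $ y $-positive-definite ($ V \geq y^{2} $) and $ \dot{V} = 2 y \dot{y} = - 2 y^{2} z^{2} \leq 0 $, so the solution $ x = 0 $ is $ y $-stable; indeed $ | y ( t) | \leq | y_{0} | $ for all $ t \geq t_{0} $, even though $ z ( t) $ grows without bound.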
  
 
===Theorem 2.===
If the conditions of Theorem 1 are fulfilled and if, moreover, $ V $ admits an infinitesimal upper bound for $ x $, then the solution $ x = 0 $ of the system (1) is $ y $-stable uniformly in $ t_{0} $.
  
 
===Theorem 3.===
If the conditions of Theorem 1 are fulfilled and if, moreover, $ V $ admits an infinitesimal upper bound for $ y $, then for any $ \epsilon > 0 $ one can find a $ \delta_{2} ( \epsilon ) > 0 $ such that $ t_{0} \geq 0 $, $ \| y_{0} \| \leq \delta_{2} $, $ 0 \leq \| z_{0} \| < \infty $ together imply the inequality

$$
\| y ( t; t_{0}, x_{0} ) \| < \epsilon \ \ \textrm{ for  all } \ t \geq t_{0} .
$$
  
 
===Theorem 4.===
If the system (1) is such that there exists a $ y $-positive-definite function $ V $ admitting an infinitesimal upper bound for $ x_{1} \dots x_{p} $ ($ k \leq p \leq n $) and with derivative $ \dot{V} $ negative-definite in $ x_{1} \dots x_{p} $, then the solution $ x = 0 $ of the system (1) is asymptotically $ y $-stable.
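For Theorem 4 with $ p = k $ (again an assumed example, not from the article): for $ k = m = 1 $ take

$$
\dot{y} = - y ( 1 + z^{2} ), \ \ \dot{z} = z, \ \ V = y^{2} .
$$

Then $ \dot{V} = - 2 y^{2} ( 1 + z^{2} ) \leq - 2 y^{2} $, which is negative-definite in $ y $, while $ V = y^{2} $ admits an infinitesimal upper bound for $ y $; hence $ x = 0 $ is asymptotically $ y $-stable, as is also seen directly from $ | y ( t) | \leq | y_{0} | e^{- ( t - t_{0} )} $.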
  
For the study of $ y $-instability, Chetaev's instability theorem (cf. [[Chetaev function|Chetaev function]]) has been successfully applied, as well as certain other theorems. Conditions for the converse of a number of theorems on $ y $-stability have been established; for example, the converses of Theorems 1 and 2, as well as of Theorem 4 for $ p = k $. Methods of differential inequalities and Lyapunov vector functions have been applied to establish theorems on asymptotic $ y $-stability in the large, on first-order approximations, etc. (cf. [[#References|[3]]]).
  
 
====References====
<table><TR><TD valign="top">[1]</TD> <TD valign="top">  A.M. Lyapunov,  ''Mat. Sb.'' , '''17''' :  2  (1893)  pp. 253–333</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  V.V. Rumyantsev,  "On stability of motion for a part of the variables"  ''Vestn. Moskov. Univ. Ser. Mat. Mekh. Astron. Fiz. Khim.'' :  4  (1957)  pp. 9–16  (In Russian)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top">  A.S. Oziraner,  V.V. Rumyantsev,  "The method of Lyapunov functions in the stability problem for motion with respect to a part of the variables"  ''J. Appl. Math. Mech.'' , '''36'''  (1972)  pp. 341–362  ''Prikl. Mat. i Mekh.'' , '''36''' :  2  (1972)  pp. 364–384</TD></TR></table>
 
 
  
 
====Comments====
Stability for a part of the variables is also called partial stability and occasionally conditional stability, [[#References|[a1]]]. However, the latter phrase is also used in a different meaning: Let $ C $ be a class of trajectories and $ x ( t; t_{0}, x_{0} ) $ a trajectory in $ C $. This trajectory is stable relative to $ C $ if for a given $ \epsilon > 0 $ there exists a $ \delta > 0 $ such that for each trajectory $ \widetilde{x} ( t; t_{0}, \widetilde{x}_{0} ) $ in $ C $ one has that $ \| x_{0} - \widetilde{x}_{0} \| \leq \delta $ implies $ \| x ( t; t_{0}, x_{0} ) - \widetilde{x} ( t; t_{0}, \widetilde{x}_{0} ) \| \leq \epsilon $. If $ C $ is not the class of all trajectories, such an $ x ( t; t_{0}, x_{0} ) $ is called conditionally stable, [[#References|[a2]]].
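A classical instance of the latter notion (an illustration, not part of the original comments): for the saddle system $ \dot{x}_{1} = x_{1} $, $ \dot{x}_{2} = - x_{2} $, the zero solution is unstable, but it is conditionally stable relative to the class $ C $ of trajectories with $ x_{1} ( t_{0} ) = 0 $, since every trajectory in $ C $ satisfies

$$
\| x ( t; t_{0}, x_{0} ) \| = | x_{2} ( t_{0} ) | e^{- ( t - t_{0} )} \leq \| x_{0} \| .
$$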
  
 
====References====
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  W. Hahn,  "Stability of motion" , Springer  (1965)  pp. §55</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top">  S. Lefshetz,  "Differential equations: geometric theory" , Dover, reprint  (1977)  pp. 78, 83</TD></TR></table>

Latest revision as of 08:22, 6 June 2020


Lyapunov stability of the solution $ x = 0 $ relative not to all but only to certain variables $ x _ {1} \dots x _ {k} $, $ k < n $, of a system of ordinary differential equations

$$ \tag{1 } \dot{x} _ {s} = X _ {s} ( t, x _ {1} \dots x _ {n} ),\ \ s = 1 \dots n. $$

Here $ X _ {s} ( t, x) $ are given real-valued continuous functions, satisfying in the domain

$$ \tag{2 } t \geq 0,\ \ \sum _ {i = 1 } ^ { k } x _ {i} ^ {2} \leq \textrm{ const } ,\ \ \sum _ {j = k + 1 } ^ { n } x _ {j} ^ {2} < \infty $$

the conditions for the existence and uniqueness of the solution $ x ( t; t _ {0} , x _ {0} ) $; moreover,

$$ X _ {s} ( t, 0) \equiv 0,\ s = 1 \dots n, $$

and any solution is defined for all $ t \geq t _ {0} \geq 0 $ for which $ \sum _ {i = 1 } ^ {k} x _ {i} ^ {2} \leq H $.

Put $ x _ {i} = y _ {i} $ for $ i = 1 \dots k $; $ x _ {k + j } = z _ {j} $ for $ j = 1 \dots m $, $ n = k + m $ and $ m \geq 0 $; let

$$ \| y \| = \ \left ( \sum _ {i = 1 } ^ { k } y _ {i} ^ {2} \right ) ^ {1/2} ,\ \ \| z \| = \ \left ( \sum _ {j = 1 } ^ { m } z _ {j} ^ {2} \right ) ^ {1/2} , $$

$$ \| x \| = \left ( \sum _ {s = 1 } ^ { n } x _ {s} ^ {2} \right ) ^ {1/2} . $$

The solution $ x = 0 $ of the system (1) is called: a) stable relative to $ x _ {1} \dots x _ {k} $ or $ y $- stable if

$$ ( \forall \epsilon > 0) ( \forall t _ {0} \in I) ( \exists \delta > 0) ( \forall x _ {0} \in B _ \delta ) ( \forall t \in J ^ {+} ): $$

$$ \| y ( t; t _ {0} , x _ {0} ) \| < \epsilon , $$

i.e. for any given numbers $ \epsilon > 0 $( $ \epsilon < H $) and $ t _ {0} \geq 0 $ one can find a number $ \delta ( \epsilon , t _ {0} ) > 0 $ such that for every perturbation $ x _ {0} $ satisfying the condition $ \| x _ {0} \| \leq \delta $ and for every $ t > t _ {0} $ the solution $ y ( t; t _ {0} , x _ {0} ) $ satisfies the condition $ \| y \| < \epsilon $;

b) $ y $- unstable in the opposite case, i.e. if

$$ ( \exists \epsilon > 0) ( \exists t _ {0} \in I) ( \forall \delta > 0) ( \exists x _ {0} \in B _ \delta ) ( \exists t \in J ^ {+} ): $$

$$ \| y ( t; t _ {0} , x _ {0} ) \| \geq \epsilon ; $$

c) $ y $- stable uniformly in $ t _ {0} $ if in definition a) for every $ \epsilon > 0 $ the number $ \delta ( \epsilon ) $ may be chosen independently of $ t _ {0} $;

d) asymptotically $ y $- stable if it is $ y $- stable and if for every $ t _ {0} \geq 0 $ there exists a $ \delta _ {1} ( t _ {0} ) > 0 $ such that

$$ \lim\limits _ {t \rightarrow \infty } \| y ( t; t _ {0} , x _ {0} ) \| = 0 \ \ \textrm{ for } \| x _ {0} \| \leq \delta _ {1} . $$

Here $ I = [ 0, \infty ) $, $ J ^ {+} $ is the maximal right interval on which $ x ( t; t _ {0} , x _ {0} ) $ is defined, $ B _ \delta = \{ {x \in \mathbf R ^ {n} } : {\| x \| < \delta } \} $; in case d), besides the conditions stated above it is assumed that all solutions of the system (1) exist on $ [ t _ {0} , \infty ) $.

The statement of the problem of stability for a part of the variables was given by A.M. Lyapunov [1] as a generalization of the stability problem with respect to all variables $ ( k = n) $. For a solution of this problem it is particularly effective to apply the method of Lyapunov functions, suitably modified (cf. [2], and Lyapunov function) for the problem of $ y $- stability. At the basis of this method there are a number of theorems generalizing the classical theorem of Lyapunov.

Consider a real-valued function $ V ( t, x) \in C ^ {1} $ with $ V ( t, 0) = 0 $, together with its total derivative with respect to time along the trajectories of (1):

$$ \dot{V} = \ \frac{\partial V }{\partial t } + \sum _ {s = 1 } ^ { n } \frac{\partial V }{\partial x _ {s} } X _ {s} . $$
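This total derivative can be checked numerically; in the sketch below the system, the function $ V $ and all names are hypothetical, chosen only to illustrate the formula:

```python
# Hedged numeric check of the total derivative above for the
# hypothetical system dx1/dt = -x1*(1 + x2**2), dx2/dt = x1*x2,
# with V(t, x) = x1**2 / 2 (all chosen for illustration only).

def X(t, x):
    """Right-hand sides X_s of the sample system (1)."""
    return [-x[0] * (1.0 + x[1] ** 2), x[0] * x[1]]

def V(t, x):
    return 0.5 * x[0] ** 2

def V_dot(t, x, h=1e-6):
    """dV/dt + sum_s (dV/dx_s)*X_s, via central finite differences."""
    total = (V(t + h, x) - V(t - h, x)) / (2 * h)   # partial dV/dt
    for s, Xs in enumerate(X(t, x)):
        xp, xm = list(x), list(x)
        xp[s] += h
        xm[s] -= h
        total += (V(t, xp) - V(t, xm)) / (2 * h) * Xs
    return total

# By hand: V_dot = x1 * (-x1*(1 + x2**2)) = -x1**2 * (1 + x2**2),
# which at x = (2, 3) equals -40.
assert abs(V_dot(0.0, [2.0, 3.0]) - (-40.0)) < 1e-6
```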

A function $ V ( t, x) $ of constant sign is called $ y $-sign-definite if there exists a positive-definite function $ W ( y) $ such that in the region (2),

$$ V ( t, x) \geq W ( y) \ \ \textrm{ or } \ \ - V ( t, x) \geq W ( y). $$

A bounded function $ V ( t, x) $ is said to admit an infinitesimal upper bound for $ x _ {1} \dots x _ {p} $ if for every $ l > 0 $ there exists a $ \lambda ( l) > 0 $ such that

$$ | V ( t, x) | < l $$

for $ t \geq 0 $, $ \sum _ {i = 1 } ^ {p} x _ {i} ^ {2} < \lambda $, $ - \infty < x _ {p + 1 } \dots x _ {n} < \infty $.

Theorem 1.

If the system (1) is such that there exists a $ y $-positive-definite function $ V ( t, x) $ with derivative $ \dot{V} \leq 0 $, then the solution $ x = 0 $ is $ y $-stable.
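A hedged sketch of theorem 1 in action; the system, the Lyapunov function and all names below are hypothetical, chosen only to illustrate the theorem:

```python
# Hypothetical system (not from the article):
#   dx1/dt = -x1**3,   dx2/dt = x1*x2,   y = (x1).
# V(t, x) = x1**2 / 2 is y-positive-definite (take W(y) = y**2 / 2)
# and its derivative along (1) is V_dot = -x1**4 <= 0, so theorem 1
# gives y-stability of the solution x = 0.

def euler(x1, x2, h=1e-3, steps=20000):
    """Forward-Euler integration of the sample system."""
    for _ in range(steps):
        x1, x2 = x1 - h * x1 ** 3, x2 + h * x1 * x2
    return x1, x2

# |x1(t)| never grows along the trajectory, consistent with
# y-stability; nothing is claimed about the x2-component.
x1_T, x2_T = euler(0.1, 5.0)
assert 0.0 < x1_T < 0.1
```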

Theorem 2.

If the conditions of theorem 1 are fulfilled and if, moreover, $ V $ admits an infinitesimal upper bound for $ x $, then the solution $ x = 0 $ of the system (1) is $ y $-stable uniformly in $ t _ {0} $.

Theorem 3.

If the conditions of theorem 1 are fulfilled and if, moreover, $ V $ admits an infinitesimal upper bound for $ y $, then for any $ \epsilon > 0 $ one can find a $ \delta _ {2} ( \epsilon ) > 0 $ such that $ t _ {0} \geq 0 $, $ \| y _ {0} \| \leq \delta _ {2} $, $ 0 \leq \| z _ {0} \| < \infty $ implies the inequality

$$ \| y ( t; t _ {0} , x _ {0} ) \| < \ \epsilon \ \textrm{ for } \textrm{ all } t \geq t _ {0} . $$

Theorem 4.

If the system (1) is such that there exists a $ y $-positive-definite function $ V $ admitting an infinitesimal upper bound for $ x _ {1} \dots x _ {p} $ ($ k \leq p \leq n $) and with negative-definite derivative $ \dot{V} $ for $ x _ {1} \dots x _ {p} $, then the solution $ x = 0 $ of the system (1) is asymptotically $ y $-stable.
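A hedged illustration of theorem 4; again the system and all names are hypothetical, chosen only for the sketch:

```python
import math

# Hypothetical system (not from the article):
#   dx1/dt = -x1,   dx2/dt = x1*x2,   y = (x1), p = k = 1.
# V = x1**2 / 2 is y-positive-definite, admits an infinitesimal
# upper bound for x1, and V_dot = -x1**2 is negative-definite in x1,
# so theorem 4 yields asymptotic y-stability of x = 0.

def x1_exact(t, x1_0):
    """Closed-form y-component: x1(t) = x1_0 * exp(-t)."""
    return x1_0 * math.exp(-t)

def x2_exact(t, x1_0, x2_0):
    """dx2/dt = x1*x2 integrates to x2_0 * exp(x1_0*(1 - exp(-t)))."""
    return x2_0 * math.exp(x1_0 * (1.0 - math.exp(-t)))

# The y-component tends to 0 for small initial perturbations ...
assert abs(x1_exact(40.0, 0.5)) < 1e-15
# ... while x2 merely stays bounded; the theorem asserts nothing
# about the remaining variables tending to 0.
assert abs(x2_exact(40.0, 0.5, 2.0) - 2.0 * math.e ** 0.5) < 1e-9
```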

For the study of $ y $-instability, Chetaev's instability theorem (cf. Chetaev function) has been successfully applied, as well as certain other theorems. Conditions for the converse of a number of theorems on $ y $-stability have been established; for example, the converses of theorems 1 and 2, as well as of theorem 4 for $ p = k $. Methods of differential inequalities and of Lyapunov vector functions have been applied to establish theorems on asymptotic $ y $-stability in the large, on first-order approximations, etc. (cf. [3]).

References

[1] A.M. Lyapunov, Mat. Sb., 17:2 (1893) pp. 253–333
[2] V.V. Rumyantsev, "On stability of motion for a part of the variables" Vestn. Moskov. Univ. Ser. Mat. Mekh. Astron. Fiz. Khim., 4 (1957) pp. 9–16 (In Russian)
[3] A.S. Oziraner, V.V. Rumyantsev, "The method of Lyapunov functions in the stability problem for motion with respect to a part of the variables" J. Appl. Math. Mech., 36 (1972) pp. 341–362; Prikl. Mat. i Mekh., 36:2 (1972) pp. 364–384

Comments

Stability for a part of the variables is also called partial stability and occasionally conditional stability, [a1]. However, the latter phrase is also used in a different meaning: Let $ C $ be a class of trajectories and $ x ( t; t _ {0} , x _ {0} ) $ a trajectory in $ C $. This trajectory is stable relative to $ C $ if for a given $ \epsilon > 0 $ there exists a $ \delta > 0 $ such that for each trajectory $ \widetilde{x} ( t; t _ {0} , \widetilde{x} _ {0} ) $ in $ C $ one has that $ \| x _ {0} - \widetilde{x} _ {0} \| \leq \delta $ implies $ \| x ( t; t _ {0} , x _ {0} ) - \widetilde{x} ( t; t _ {0} , \widetilde{x} _ {0} ) \| \leq \epsilon $. If $ C $ is not the class of all trajectories, such an $ x ( t; t _ {0} , x _ {0} ) $ is called conditionally stable, [a2].

References

[a1] W. Hahn, "Stability of motion" , Springer (1965) pp. §55
[a2] S. Lefschetz, "Differential equations: geometric theory" , Dover, reprint (1977) pp. 78, 83
How to Cite This Entry:
Stability for a part of the variables. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Stability_for_a_part_of_the_variables&oldid=17499
This article was adapted from an original article by V.V. Rumyantsev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article