Lyapunov equation

Usually, the Lyapunov equation is understood to be the matrix equation

\begin{equation} \tag{a1} A ^ { * } X + X A + C = 0, \end{equation}

where the star denotes transposition for matrices with real entries and transposition and complex conjugation for matrices with complex entries; $C$ is symmetric (or Hermitian in the complex case; cf. Hermitian matrix; Symmetric matrix). In fact, this is a special case of the matrix Sylvester equation

\begin{equation} \tag{a2} X A + B X + C = 0. \end{equation}

The main result concerning the Sylvester equation is the following: If $A$ and $- B$ have no common eigenvalues, then the Sylvester equation has a unique solution for any $C$.
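This condition is easy to test, and the solution can be computed with standard library routines (SciPy's `scipy.linalg.solve_sylvester`, for instance, implements the Bartels–Stewart algorithm). A minimal sketch, assuming NumPy and SciPy are available; note that `solve_sylvester(a, b, q)` solves $a X + X b = q$, so (a2) is rewritten as $B X + X A = - C$:

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

# Solvability check: A and -B must have no common eigenvalues.
gap = min(abs(la - lb) for la in np.linalg.eigvals(A)
                       for lb in np.linalg.eigvals(-B))
assert gap > 1e-8

# solve_sylvester(a, b, q) solves a @ X + X @ b = q, so writing (a2)
# as B X + X A = -C gives its solution directly.
X = linalg.solve_sylvester(B, A, -C)
print(np.allclose(X @ A + B @ X + C, 0))  # True
```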

If $B = A ^ { * }$ and no pair of eigenvalues $\lambda _ { j }$, $\lambda _ { k }$ of $A$ (with $j = k$ allowed, in any numbering of the eigenvalues) satisfies $\lambda _ { j } + \overline { \lambda } _ { k } = 0$, then (a1) has a unique Hermitian solution for any $C$. Moreover, if $A$ is a Hurwitz matrix (i.e., all its eigenvalues lie in the open left half-plane, so have strictly negative real parts), then this unique solution is

\begin{equation} \tag{a3} X = \int _ { 0 } ^ { \infty } e ^ { A ^ { * } t } C e ^ { A t } \, d t, \end{equation}

and if $C \leq 0$, then $X \leq 0$. From this one may deduce that if $A$ and $P$ satisfy $A ^ { * } P + P A + C = 0$ for some $C > 0$, then a necessary and sufficient condition for $A$ to be a Hurwitz matrix is that $P > 0$.
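This criterion is easy to check numerically. Below is a minimal sketch (assuming NumPy and SciPy; `scipy.linalg.solve_continuous_lyapunov(a, q)` solves $a X + X a ^ { * } = q$, so (a1) corresponds to passing $A ^ { * }$ and $- C$) that solves (a1) for a Hurwitz $A$, compares the result with the integral formula (a3) by quadrature, and confirms that $X > 0$ when $C > 0$:

```python
import numpy as np
from scipy import linalg
from scipy.integrate import simpson

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) - 3.0 * np.eye(n)  # shifted left: Hurwitz
assert np.all(np.linalg.eigvals(A).real < 0)

G = rng.standard_normal((n, n))
C = G.T @ G + np.eye(n)                            # symmetric, C > 0

# solve_continuous_lyapunov(a, q) solves a @ X + X @ a^H = q;
# with a = A^* and q = -C this is exactly (a1).  A is real, so A^* = A.T.
X = linalg.solve_continuous_lyapunov(A.T, -C)
print(np.allclose(A.T @ X + X @ A + C, 0))         # True

# Compare with the integral formula (a3) by quadrature; the integrand
# decays exponentially, so a finite horizon suffices.
ts = np.linspace(0.0, 20.0, 2001)
vals = np.array([linalg.expm(A.T * t) @ C @ linalg.expm(A * t) for t in ts])
X_int = simpson(vals, x=ts, axis=0)
print(np.allclose(X, X_int, atol=1e-6))            # True

# C > 0 and A Hurwitz force X > 0 (the Lyapunov stability criterion).
print(np.all(np.linalg.eigvalsh(X) > 0))           # True
```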

In fact, this last property justifies attaching Lyapunov's name to (a1): in Lyapunov's famous monograph [a1], Chap. 20, Thm. 2, one finds the following result. Consider the partial differential equation

\begin{equation} \tag{a4} \sum _ { i = 1 } ^ { n } \left( \sum _ { j = 1 } ^ { n } a _ { i j } x _ { j } \right) \frac { \partial V } { \partial x _ { i } } = U. \end{equation}

If $A = ( a _ { i j } )$ has eigenvalues with strictly negative real parts and $U$ is a form of definite sign and even degree, then the solution, $V$, of this equation is a form of the same degree that is sign definite, with sign opposite to that of $U$. Now, if $U = - x ^ { * } C x < 0$ with $x = \operatorname { col } ( x _ { 1 } , \ldots , x _ { n } )$, then $V = x ^ { * } P x$, where $P > 0$ is the solution of (a1), solves (a4). In fact, $V$ is a Lyapunov function for the system

\begin{equation} \tag{a5} \dot { x } = A x. \end{equation}
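Indeed, a one-line verification: if $P = P ^ { * } > 0$ solves (a1) with $C > 0$, then along any solution of (a5),

\begin{equation*} \dot { V } = \dot { x } ^ { * } P x + x ^ { * } P \dot { x } = x ^ { * } ( A ^ { * } P + P A ) x = - x ^ { * } C x < 0 \quad \text { for } x \neq 0, \end{equation*}

so $V$ strictly decreases along every non-trivial trajectory.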

These facts and results have a straightforward extension to the discrete-time case: for the system

\begin{equation} \tag{a6} x _ { k + 1 } = A x _ { k } \end{equation}

one may consider the quadratic Lyapunov function as above (i.e. $V = x ^ { * } P x$) and obtain that $P$ has to be a solution of the discrete-time Lyapunov equation

\begin{equation} \tag{a7} A ^ { * } X A - X + C = 0, \end{equation}

whose solution has the form

\begin{equation} \tag{a8} X = \sum _ { k = 0 } ^ { \infty } ( A ^ { * } ) ^ { k } C A ^ { k }, \end{equation}

provided the eigenvalues of $A$ are inside the unit disc.
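The discrete-time equation (a7) is likewise supported by standard libraries. A minimal sketch (assuming SciPy; `scipy.linalg.solve_discrete_lyapunov(a, q)` solves $a X a ^ { * } - X + q = 0$, so (a7) corresponds to passing $A ^ { * }$ and $C$), which also compares the result with a truncation of the series (a8):

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))    # spectral radius 0.9 < 1

G = rng.standard_normal((n, n))
C = G.T @ G                                        # symmetric, C >= 0

# solve_discrete_lyapunov(a, q) solves a @ X @ a^H - X + q = 0;
# with a = A^* and q = C this is exactly (a7): A^* X A - X + C = 0.
X = linalg.solve_discrete_lyapunov(A.T, C)
print(np.allclose(A.T @ X @ A - X + C, 0))         # True

# Compare with a truncation of the series (a8); the powers of A decay
# geometrically, so a few hundred terms suffice here.
X_series = sum(np.linalg.matrix_power(A.T, k) @ C @ np.linalg.matrix_power(A, k)
               for k in range(300))
print(np.allclose(X, X_series))                    # True
```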

The equation may also be defined in the time-varying case. For the system

\begin{equation} \tag{a9} \dot { x } = A ( t ) x \end{equation}

one may consider the quadratic Lyapunov function $V ( t , x ) = x ^ { * } P ( t ) x$ and obtain that $P ( t )$ has to be the unique solution, bounded on the whole real axis, of the matrix differential equation

\begin{equation} \tag{a10} \dot{X} + A ^ { * } ( t ) X + X A ( t ) + C ( t ) = 0. \end{equation}

This solution is

\begin{equation} \tag{a11} X ( t ) = \int _ { t } ^ { \infty } X _ { A } ^ { * } ( z , t ) C ( z ) X _ { A } ( z , t ) \, d z, \end{equation}

$X _ { A } ( t , z )$ being the transition matrix of (a9), i.e. the matrix solution of $\partial X _ { A } ( t , z ) / \partial t = A ( t ) X _ { A } ( t , z )$ with $X _ { A } ( z , z ) = I$. The solution is well defined if $A ( t )$ defines an exponentially stable evolution ($| X _ { A } ( t , z ) | \leq \beta e ^ { - \alpha ( t - z ) }$ for $t \geq z$, with $\alpha , \beta > 0$). It is worth mentioning that if $A ( t )$ and $C ( t )$ are periodic or almost periodic, then $X ( t )$ defined by (a11) is periodic or almost periodic, respectively. Extensions of this result to the discrete-time or infinite-dimensional (operator) case are widely known. The Lyapunov equation has many applications in stability and control theory, and efficient numerical algorithms for solving it are available.
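For the time-varying case there is no single closed-form library call, but the bounded solution can be approximated by integrating (a10) backwards from a zero terminal condition at a large horizon; under exponential stability the result converges to (a11). A minimal sketch (assuming SciPy; the $2 \pi$-periodic $A ( t )$, $C ( t )$ below are illustrative), which also exhibits the periodicity and positivity of $X ( t )$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 2*pi-periodic data; A(t) is upper triangular with diagonal
# entries bounded away from zero in the left half-plane, so it generates
# an exponentially stable evolution.
def A(t):
    return np.array([[-1.0 + 0.3 * np.sin(t), 0.2],
                     [0.0, -2.0 + 0.3 * np.cos(t)]])

def C(t):
    return (2.0 + np.cos(t)) * np.eye(2)   # C(t) > 0, 2*pi-periodic

# (a10) as a vector ODE, integrated backwards from X(T) = 0.
def rhs(t, x):
    X = x.reshape(2, 2)
    return -(A(t).T @ X + X @ A(t) + C(t)).ravel()

T = 60.0
sol = solve_ivp(rhs, [T, 0.0], np.zeros(4), dense_output=True,
                rtol=1e-10, atol=1e-12)
X = lambda t: sol.sol(t).reshape(2, 2)

# Far from the horizon T, the backward solution has converged to the
# bounded solution (a11): it is 2*pi-periodic and positive definite.
print(np.allclose(X(5.0), X(5.0 + 2 * np.pi), atol=1e-7))  # True
print(np.all(np.linalg.eigvalsh(X(5.0)) > 0))              # True
```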

References

[a1] A.M. Lyapunov, "General problem of stability of motion", USSR Acad. Publ. House (1950) (In Russian)
[a2] R.E. Bellman, "Introduction to matrix analysis", McGraw-Hill (1960)
[a3] A. Halanay, "Differential equations: stability, oscillations, time lags", Acad. Press (1966)
[a4] A. Halanay, D. Wexler, "Qualitative theory of pulse systems", Nauka (1971) (In Russian)
[a5] A. Halanay, V. Räsvan, "Applications of Lyapunov methods in stability", Kluwer Acad. Publ. (1993)