Cayley-Hamilton theorem

From Encyclopedia of Mathematics
Let $C ^ { n  \times m}$ be the set of complex $( n \times m )$-matrices and $A \in C ^ { n \times n }$. Let
  
\begin{equation*} \varphi ( \lambda ) = \operatorname { det } [ I _ { n } \lambda - A ] = \sum _ { i = 0 } ^ { n } a _ { i } \lambda ^ { i } \quad ( a _ { n } = 1 ) \end{equation*}
  
be the characteristic polynomial of $A$, where $ { I } _ { n }$ is the $( n \times n )$ identity matrix. The Cayley–Hamilton theorem says [[#References|[a2]]], [[#References|[a9]]] that every square matrix satisfies its own [[Characteristic equation|characteristic equation]], i.e.
  
\begin{equation*} \varphi ( A ) = \sum _ { i = 0 } ^ { n } a _ { i } A ^ { i } = 0, \end{equation*}
  
where $0$ is the zero-matrix.
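As an illustrative numerical check (a sketch, not part of the article), the identity $\varphi ( A ) = 0$ can be verified with NumPy; `np.poly` returns the coefficients of the characteristic polynomial of a square matrix, highest power first, with $a _ { n } = 1$:

```python
import numpy as np

# Numerical check of the classical Cayley-Hamilton theorem for a
# hypothetical random 4 x 4 real matrix A.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))

a = np.poly(A)  # a[0] = a_n = 1, ..., a[n] = a_0

# phi(A) = sum_i a_i A^i; the coefficient of lambda^i is a[n - i].
phi_A = sum(a[n - i] * np.linalg.matrix_power(A, i) for i in range(n + 1))
print(np.allclose(phi_A, np.zeros((n, n))))  # True up to rounding
```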
  
The classical Cayley–Hamilton theorem can be extended to rectangular matrices. A matrix $A \in C ^ { m \times n }$ for $n > m$ may be written as $A = [ A _ { 1 } , A _ { 2 } ]$, $A _ { 1 } \in C ^ { m \times m }$, $A _ { 2 } \in C ^ { m \times ( n - m ) }$. Let
  
\begin{equation*} \operatorname { det } [ I _ { m } \lambda - A _ { 1 } ] = \sum _ { i = 0 } ^ { m } a _ { i } \lambda ^ { i } \quad ( a _ { m } = 1 ). \end{equation*}
  
Then the matrix $A \in C ^ { m \times n }$ ($n > m$) satisfies the equation [[#References|[a8]]]
  
\begin{equation*} \sum _ { i = 0 } ^ { m } a _ { m - i } [ A _ { 1 } ^ { m - i + 1 } , A _ { 1 } ^ { m - i } A _ { 2 } ] = 0. \end{equation*}
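The wide case can be checked numerically in the same way; in the form used here the left-hand side equals $[ A _ { 1 } \varphi ( A _ { 1 } ) , \varphi ( A _ { 1 } ) A _ { 2 } ]$, which vanishes by the classical theorem. A sketch with hypothetical sizes $m = 3$, $n = 5$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 5
A = rng.standard_normal((m, n))
A1, A2 = A[:, :m], A[:, m:]  # A = [A1, A2] with A1 of size m x m

a = np.poly(A1)              # a[i] is the coefficient a_{m-i} of lambda^{m-i}
mp = np.linalg.matrix_power

# sum_i a_{m-i} [A1^{m-i+1}, A1^{m-i} A2], assembled block by block
S = sum(a[i] * np.hstack([mp(A1, m - i + 1), mp(A1, m - i) @ A2])
        for i in range(m + 1))
print(np.allclose(S, 0))  # True up to rounding
```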
A matrix $A \in C ^ { m \times n }$ ($m > n$) may be written as
\begin{equation*} A = \left[ \begin{array} { l } { A _ { 1 } } \\ { A _ { 2 } } \end{array} \right] , \quad A _ { 1 } \in C ^ { n \times n } , A _ { 2 } \in C ^ { ( m - n ) \times n }. \end{equation*}
  
 
Let
 
  
\begin{equation*} \operatorname { det } [ I _ { n } \lambda - A _ { 1 } ] = \sum _ { i = 0 } ^ { n } a _ { i } \lambda ^ { i } \quad ( a _ { n } = 1 ). \end{equation*}
  
Then the matrix $A \in C ^ { m \times n }$ ($m > n$) satisfies the equation [[#References|[a8]]]
  
\begin{equation*} \sum _ { i = 0 } ^ { n } a _ { n - i } \left[ \begin{array} { c } { A _ { 1 } ^ { n - i + 1 } } \\ { A _ { 2 } A _ { 1 } ^ { n - i } } \end{array} \right] = 0. \end{equation*}
  
 
The Cayley–Hamilton theorem can also be extended to block matrices ([[#References|[a4]]], [[#References|[a13]]], [[#References|[a15]]]). Let
  
\begin{equation} \tag{a1} A _ { 1 } = \left[ \begin{array} { c c c } { A _ { 11 } } & { \dots } & { A _ { 1 m } } \\ { \dots } & { \dots } & { \dots } \\ { A _ { m 1 } } & { \dots } & { A _ { m m } } \end{array} \right] \in C ^ { m n \times m n }, \end{equation}
  
where $A _ { i j } \in C ^ { n \times n }$ are commutative, i.e. $A _ { i j } A _ { k l } = A _ { k l } A _ { i j }$ for all $i , j , k , l = 1 , \dots , m$. Let
  
\begin{equation*} \Delta ( \Lambda ) = \operatorname { Det } [ I _ { m } \otimes \Lambda - A _ { 1 } ] = \Lambda ^ { m } + D _ { 1 } \Lambda ^ { m - 1 } + \ldots + D _ { m - 1 } \Lambda + D _ { m } , \quad D _ { k } \in C ^ { n \times n } , \; k = 1 , \ldots , m, \end{equation*}
  
be the matrix characteristic polynomial and let $\Lambda \in C ^ { n \times n }$ be the matrix (block) eigenvalue of $A _ { 1 }$, where $\otimes$ denotes the Kronecker product. The matrix $\Delta ( \Lambda )$ is obtained by developing the determinant of $[ I _ { m } \otimes \Lambda - A _ { 1 } ]$, considering its commuting blocks as elements [[#References|[a15]]].
  
 
The block matrix (a1) satisfies the equation [[#References|[a15]]]
 
  
\begin{equation*} \Delta ( A _ { 1 } ) = \sum _ { i = 0 } ^ { m } ( I _ { m } \otimes D _ { m - i } ) A _ { 1 } ^ { i } = 0 \quad ( D _ { 0 } = I _ { n } ). \end{equation*}
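For $m = 2$ the matrix characteristic polynomial can be written out by treating the commuting blocks as elements: $\Delta ( \Lambda ) = \Lambda ^ { 2 } + D _ { 1 } \Lambda + D _ { 2 }$ with $D _ { 1 } = - ( A _ { 11 } + A _ { 22 } )$ and $D _ { 2 } = A _ { 11 } A _ { 22 } - A _ { 12 } A _ { 21 }$. A numerical sketch, with hypothetical blocks chosen as polynomials in a single matrix $X$ so that they commute pairwise:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
X = rng.standard_normal((n, n))
I = np.eye(n)

# Hypothetical commuting blocks: all are polynomials in the same X.
A11, A12, A21, A22 = X, X @ X, I + X, 2 * X + 3 * I
A1 = np.block([[A11, A12], [A21, A22]])

# m = 2 block characteristic polynomial, blocks treated as elements.
D1 = -(A11 + A22)
D2 = A11 @ A22 - A12 @ A21

# Delta(A1) = (I_2 kron D2) + (I_2 kron D1) A1 + A1^2 should vanish.
I2 = np.eye(2)
Delta = (np.kron(I2, D2)
         + np.kron(I2, D1) @ A1
         + np.linalg.matrix_power(A1, 2))
print(np.allclose(Delta, 0))  # True up to rounding
```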
  
Consider now a rectangular block matrix $A = [ A _ { 1 } , A _ { 2 } ] \in C ^ { m n \times ( m n + p ) }$, where $A _ { 1 }$ has the form (a1) and $A _ { 2 } \in C ^ { m n \times p }$ ($p > 0$). The matrix $A$ satisfies the equation [[#References|[a4]]]
  
\begin{equation*} \sum _ { i = 0 } ^ { m } ( I _ { m } \otimes D _ { m - i } ) [ A _ { 1 } ^ { i + 1 } , A _ { 1 } ^ { i } A _ { 2 } ] = 0 \quad ( D _ { 0 } = I _ { n } ). \end{equation*}
  
If $A = \left[ \begin{array} { l } { A _ { 1 } } \\ { A _ { 2 } } \end{array} \right] \in C ^ { ( m n + p ) \times m n }$, where $A _ { 1 }$ has the form (a1) and $A _ { 2 } \in C ^ { p \times m n }$, then
  
\begin{equation*} \sum _ { i = 0 } ^ { m } \left[ \begin{array} { l } { A _ { 1 } } \\ { A _ { 2 } } \end{array} \right] ( I _ { m } \otimes D _ { m - i } ) A _ { 1 } ^ { i } = 0 \quad ( D _ { 0 } = I _ { n } ). \end{equation*}
  
A pair of matrices $E , A \in C ^ { n \times n }$ is called regular if $\operatorname { det } [ E \lambda - A ] \neq 0$ for some $\lambda \in \mathbf{C}$ [[#References|[a10]]], [[#References|[a11]]], [[#References|[a12]]]. The pair is called standard if there exist scalars $\alpha , \beta \in \mathbf{C}$ such that $E \alpha + A \beta = I _ { n }$. If the pair $E , A \in C ^ { n \times n }$ is regular, then the pair
  
\begin{equation} \tag{a2} \overline{E} = [ E \lambda - A ] ^ { - 1 } E , \quad \overline{A} = [ E \lambda - A ] ^ { - 1 } A \end{equation}
  
is standard. If the pair $E , A \in C ^ { n \times n }$ is standard, then it is also commutative ($E A = A E$). Let a pair $E , A \in C ^ { n \times n }$ be standard (commutative) and
  
\begin{equation*} \Delta ( \lambda , \mu ) = \operatorname { det } [ E \lambda - A \mu ] = \sum _ { i = 0 } ^ { n } a _ { i , n - i } \lambda ^ { i } \mu ^ { n - i }. \end{equation*}
  
 
Then the pair satisfies the equation [[#References|[a1]]]
 
  
\begin{equation*} \Delta ( A , E ) = \sum _ { i = 0 } ^ { n } a _ { i , n - i }A ^ { i } E ^ { n - i } = 0. \end{equation*}
  
In a particular case, with $\operatorname { det } [ E \lambda - A ] = \sum _ { i = 0 } ^ { n } a _ { i } \lambda ^ { i }$, it follows that $\sum _ { i = 0 } ^ { n } a _ { i } A ^ { i } E ^ { n - i } = 0$.
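The two steps, transforming a regular pair into a standard commutative pair via (a2) and then checking $\Delta ( A , E ) = 0$, can be sketched numerically for a hypothetical $2 \times 2$ pair with singular $E$; the coefficients $a _ { i , n - i }$ are recovered by interpolating $\operatorname { det } [ E \lambda - A ]$:

```python
import numpy as np

# Hypothetical regular pair (E0, A0) with singular E0.
E0 = np.array([[1.0, 0.0], [0.0, 0.0]])
A0 = np.array([[0.0, 1.0], [1.0, 0.0]])
n = 2

lam0 = 1.0                    # det[E0*lam0 - A0] = -1 != 0: the pair is regular
T = np.linalg.inv(E0 * lam0 - A0)
E, A = T @ E0, T @ A0         # the standard pair (a2); here E*lam0 - A = I

# Recover a_{i,n-i} by interpolating det[E*lam - A*mu] at mu = 1,
# a polynomial of degree <= n in lam.
pts = np.array([0.0, 1.0, 2.0])
vals = [np.linalg.det(E * t - A) for t in pts]
coef = np.linalg.solve(np.vander(pts, n + 1, increasing=True), vals)

mp = np.linalg.matrix_power
Delta = sum(coef[i] * mp(A, i) @ mp(E, n - i) for i in range(n + 1))
print(np.allclose(Delta, 0))  # True up to rounding
```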
  
Let $P _ { n } ( C )$ be the set of $n$-order square complex matrices that commute in pairs and let $M _ { m } ( P _ { n } )$ be the set of square matrices partitioned in $m ^ { 2 }$ blocks belonging to $P _ { n } ( C )$.
  
Consider a standard pair of block matrices $E,A \in M _ { m } ( P _ { n } )$ and let the matrix polynomial
  
\begin{equation*} \Delta ( \Lambda , M ) = \operatorname { Det } [ E \otimes \Lambda - A \otimes M ] = \sum _ { i = 0 } ^ { m } D _ { i , m - i } \Lambda ^ { i } M ^ { m - i } , \quad D _ { i j } \in C ^ { n \times n }, \end{equation*}
  
be its matrix characteristic polynomial. The pair $( \Lambda , M )$ is called the block-eigenvalue pair of the pair $E , A$.
  
 
Then [[#References|[a6]]]
 
  
\begin{equation*} \Delta ( A , E ) = \sum _ { i = 0 } ^ { m } ( I \otimes D _ { i , m - i } ) A ^ { i } E ^ { m - i } = 0. \end{equation*}
  
 
The Cayley–Hamilton theorem can also be extended to singular two-dimensional linear systems described by Roesser-type or Fornasini–Marchesini-type models [[#References|[a3]]], [[#References|[a14]]]. The singular two-dimensional Roesser model is given by
  
\begin{equation*} \left[ \begin{array} { c c } { E _ { 1 } } & { E _ { 2 } } \\ { E _ { 3 } } & { E _ { 4 } } \end{array} \right] \left[ \begin{array} { c } { x _ { i + 1 , j } ^ { h } } \\ { x _ { i , j + 1 } ^ { v } } \end{array} \right] = \left[ \begin{array} { c c } { A _ { 1 } } & { A _ { 2 } } \\ { A _ { 3 } } & { A _ { 4 } } \end{array} \right] \left[ \begin{array} { c } { x _ { i j } ^ { h } } \\ { x _ { i j } ^ { v } } \end{array} \right] + \left[ \begin{array} { c } { B _ { 1 } } \\ { B _ { 2 } } \end{array} \right] u _ { i j }, \end{equation*}
  
\begin{equation*} i , j \in \mathbf{Z}_ { + }. \end{equation*}
  
Here, $\mathbf{Z}_ { + }$ is the set of non-negative integers; $x _ { i j } ^ { h } \in \mathbf{R} ^ { n _ { 1 } }$, respectively $x _ { i j } ^ { v } \in \mathbf{R} ^ { n _ { 2 } }$, are the horizontal, respectively vertical, semi-state vector at the point $( i , j )$; $u _ { i j } \in \mathbf{R} ^ { m }$ is the input vector; $E _ { k }$, $A _ { k }$ ($k = 1 , \dots , 4$) and $B _ { k }$ ($k = 1 , 2$) have dimensions compatible with $x _ { i j } ^ { h }$ and $x _ { i j } ^ { v }$; and
  
\begin{equation*} \left[ \begin{array} { l l } { E _ { 1 } } & { E _ { 2 } } \\ { E _ { 3 } } & { E _ { 4 } } \end{array} \right] \end{equation*}
  
 
may be singular. The characteristic polynomial has the form
 
  
\begin{equation*} \Delta ( z _ { 1 } , z _ { 2 } ) = \operatorname { det } \left[ \begin{array} { c c } { E _ { 1 } z _ { 1 } - A _ { 1 } } & { E _ { 2 } z _ { 2 } - A _ { 2 } } \\ { E _ { 3 } z _ { 1 } - A _ { 3 } } & { E _ { 4 } z _ { 2 } - A _ { 4 } } \end{array} \right] = \sum _ { i = 0 } ^ { r _ { 1 } } \sum _ { j = 0 } ^ { r _ { 2 } } a _ { i j } z _ { 1 } ^ { i } z _ { 2 } ^ { j } \end{equation*}
  
and the transition matrices $T _ { p , q }$, $p , q \in \mathbf{Z} _ { + }$, are defined by
  
\begin{equation*} \left[ \begin{array} { l l } { E _ { 1 } } & { 0 } \\ { E _ { 3 } } & { 0 } \end{array} \right] T _ { p , q - 1 } + \left[ \begin{array} { l l } { 0 } & { E _ { 2 } } \\ { 0 } & { E _ { 4 } } \end{array} \right] T _ { p - 1 , q } - \left[ \begin{array} { l l } { A _ { 1 } } & { A _ { 2 } } \\ { A _ { 3 } } & { A _ { 4 } } \end{array} \right] T _ { p - 1 , q - 1 } = \left\{ \begin{array} { l l } { I _ { n } , } & { p = q = 0 , } \\ { 0 , } & { p \neq 0 \text { or } q \neq 0 . } \end{array} \right. \end{equation*}
  
If $E = I _ { n }$, $n = n _ { 1 } + n _ { 2 }$ (the standard Roesser model), then the transition matrices $T _ { p q }$ may be computed recursively, using the formula $T _ { p q } = T _ { 10 } T _ { p - 1 , q } + T _ { 01 } T _ { p , q - 1 }$, where $T _ { 00 } = I _ { n }$,
  
\begin{equation*} T _ { 10 } = \left[ \begin{array} { c c } { A _ { 1 } } &amp; { A _ { 2 } } \\ { 0 } &amp; { 0 } \end{array} \right] ,\; T _ { 01 } = \left[ \begin{array} { c c } { 0 } &amp; { 0 } \\ { A _ { 3 } } &amp; { A _ { 4 } } \end{array} \right]. \end{equation*}
  
The matrices $T _ { p q }$ satisfy the equation [[#References|[a3]]]
  
\begin{equation*} \sum _ { i = 0 } ^ { r _ { 1 } } \sum _ { j = 0 } ^ { r _ { 2 } } a _ { i j } T _ { i j } = 0. \end{equation*}
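A numerical sketch of this result for the standard Roesser model with hypothetical scalar blocks ($n _ { 1 } = n _ { 2 } = 1$), computing the $T _ { p q }$ by the recursion above (with $T _ { p q } = 0$ for negative indices) and the $a _ { i j }$ from the $2 \times 2$ determinant by hand:

```python
import numpy as np

# Hypothetical scalar blocks A1, ..., A4, so the T_{pq} are 2 x 2 matrices.
a1, a2, a3, a4 = 2.0, -1.0, 3.0, 0.5
T10 = np.array([[a1, a2], [0.0, 0.0]])
T01 = np.array([[0.0, 0.0], [a3, a4]])

# T_{pq} = T10 T_{p-1,q} + T01 T_{p,q-1}, with T_{00} = I.
Z = np.zeros((2, 2))
T = {(0, 0): np.eye(2)}
for p in range(2):
    for q in range(2):
        if (p, q) != (0, 0):
            T[p, q] = T10 @ T.get((p - 1, q), Z) + T01 @ T.get((p, q - 1), Z)

# det[[z1 - a1, -a2], [-a3, z2 - a4]] = z1 z2 - a4 z1 - a1 z2 + (a1 a4 - a2 a3)
coef = {(1, 1): 1.0, (1, 0): -a4, (0, 1): -a1, (0, 0): a1 * a4 - a2 * a3}
S = sum(c * T[i, j] for (i, j), c in coef.items())
print(np.allclose(S, 0))  # True
```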
  
 
The singular two-dimensional Fornasini–Marchesini model is given by
 
  
\begin{equation*} E x _ { i + 1 ,\, j + 1 } = A _ { 0} x _ {i j }  + A _ { 1 } x _ { i + 1 ,\, j } + A _ { 2 } x _ { i ,\, j + 1 } + B u _ { i j }, \end{equation*}
  
\begin{equation*} i , j \in \mathbf Z _ { + }, \end{equation*}
  
where $x _ { i j } \in \mathbf{R} ^ { n }$ is the local semi-state vector at the point $( i , j )$, $u _ { i j } \in \mathbf{R} ^ { m }$ is the input vector, $E , A _ { k } \in \mathbf{R} ^ { n \times n }$ ($k = 0 , 1 , 2$), $B \in \mathbf{R} ^ { n \times m }$, and $E$ is possibly singular. The characteristic polynomial has the form
  
\begin{equation*} \Delta ( z _ { 1 } , z _ { 2 } ) = \operatorname { det } [ E z _ { 1 } z _ { 2 } - A _ { 1 } z _ { 1 } - A _ { 2 } z _ { 2 } - A _ { 0 } ] = \sum _ { i = 0 } ^ { r _ { 1 } } \sum _ { j = 0 } ^ { r _ { 2 } } a _ { i j } z _ { 1 } ^ { i } z _ { 2 } ^ { j } \end{equation*}
  
and the transition matrices <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/c/c120/c120080/c120080112.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/c/c120/c120080/c120080113.png" />, are defined by
+
and the transition matrices $T _ { p , q }$, $p , q \in \mathbf{Z} _ { + }$, are defined by
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/c/c120/c120080/c120080114.png" /></td> </tr></table>
+
\begin{equation*} E T _ { p q } - A _ { 0 } T _ { p - 1 , q - 1 } - A _ { 1 } T _ { p , q - 1 } - A _ { 2 } T _ { p - 1 , q } = \end{equation*}
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/c/c120/c120080/c120080115.png" /></td> </tr></table>
+
\begin{equation*} = \left\{ \begin{array} { l l } { I _ { n } , } &amp; { p = q = 0, } \\ { 0 , } &amp; { p \neq 0 \text { or } / \text { and } q \neq 0. } \end{array} \right. \end{equation*}
  
The matrices <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/c/c120/c120080/c120080116.png" /> satisfy the equation
+
The matrices $T _ { p q }$ satisfy the equation
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/c/c120/c120080/c120080117.png" /></td> </tr></table>
+
\begin{equation*} \sum _ { i = 0 } ^ { r _ { 1 } } \sum _ { i = 0 } ^ { r _ { 2 } } a _ { i j } T _ { i j } = 0 \end{equation*}
  
 
The theorems may be also extended to two-dimensional continuous-discrete linear systems [[#References|[a5]]].
 
The theorems may be also extended to two-dimensional continuous-discrete linear systems [[#References|[a5]]].
  
 
====References====
 
====References====
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  F.R. Chang,  C.N. Chen,  "The generalized Cayley–Hamilton theorem for standard pencils"  ''Systems and Control Lett.'' , '''18'''  (1992)  pp. 179–182</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top">  F.R. Gantmacher,  "The theory of matrices" , '''2''' , Chelsea  (1974)</TD></TR><TR><TD valign="top">[a3]</TD> <TD valign="top">  T. Kaczorek,  "Linear control systems" , '''I–II''' , Research Studies Press  (1992/93)</TD></TR><TR><TD valign="top">[a4]</TD> <TD valign="top">  T. Kaczorek,  "An extension of the Cayley–Hamilton theorem for non-square blocks matrices and computation of the left and right inverses of matrices"  ''Bull. Polon. Acad. Sci. Techn.'' , '''43''' :  1  (1995)  pp. 49–56</TD></TR><TR><TD valign="top">[a5]</TD> <TD valign="top">  T. Kaczorek,  "Extensions of the Cayley Hamilton theorem for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/c/c120/c120080/c120080118.png" />-D continuous discrete linear systems"  ''Appl. Math. and Comput. Sci.'' , '''4''' :  4  (1994)  pp. 507–515</TD></TR><TR><TD valign="top">[a6]</TD> <TD valign="top">  T. Kaczorek,  "An extension of the Cayley–Hamilton theorem for a standard pair of block matrices"  ''Appl. Math. and Comput. Sci.'' , '''8''' :  3  (1998)  pp. 511–516</TD></TR><TR><TD valign="top">[a7]</TD> <TD valign="top">  T. Kaczorek,  "An extension of Cayley–Hamillon theorem for singular <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/c/c120/c120080/c120080119.png" />-D linear systems with non-square matrices"  ''Bull. Polon. Acad. Sci. Techn.'' , '''43''' :  1  (1995)  pp. 39–48</TD></TR><TR><TD valign="top">[a8]</TD> <TD valign="top">  T. Kaczorek,  "Generalizations of the Cayley–Hamilton theorem for nonsquare matrices"  ''Prace Sem. Podstaw Elektrotechnik. i Teor. Obwodów'' , '''XVIII–SPETO'''  (1995)  pp. 
77–83</TD></TR><TR><TD valign="top">[a9]</TD> <TD valign="top">  P. Lancaster,  "Theory of matrices" , Acad. Press  (1969)</TD></TR><TR><TD valign="top">[a10]</TD> <TD valign="top">  F.L. Lewis,  "Cayley--Hamilton theorem and Fadeev's method for the matrix pencil <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/c/c120/c120080/c120080120.png" />" , ''Proc. 22nd IEEE Conf Decision Control''  (1982)  pp. 1282–1288</TD></TR><TR><TD valign="top">[a11]</TD> <TD valign="top">  F.L. Lewis,  "Further remarks on the Cayley–Hamilton theorem and Leverrie's method for the matrix pencil <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/c/c120/c120080/c120080121.png" />"  ''IEEE Trans. Automat. Control'' , '''31'''  (1986)  pp. 869–870</TD></TR><TR><TD valign="top">[a12]</TD> <TD valign="top">  B.G. Mertzios,  M.A. Christodoulous,  "On the generalized Cayley–Hamilton theorem"  ''IEEE Trans. Automat. Control'' , '''31'''  (1986)  pp. 156–157</TD></TR><TR><TD valign="top">[a13]</TD> <TD valign="top">  N.M. Smart,  S. Barnett,  "The algebra of matrices in <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/c/c120/c120080/c120080122.png" />-dimensional systems"  ''Math. Control Inform.'' , '''6'''  (1989)  pp. 121–133</TD></TR><TR><TD valign="top">[a14]</TD> <TD valign="top">  N.J. Theodoru,  "A Hamilton theorem"  ''IEEE Trans. Automat. Control'' , '''AC–34''' :  5  (1989)  pp. 563–565</TD></TR><TR><TD valign="top">[a15]</TD> <TD valign="top">  J. Victoria,  "A block-Cayley–Hamilton theorem"  ''Bull. Math. Soc. Sci. Math. Roum.'' , '''26''' :  1  (1982)  pp. 93–97</TD></TR></table>
+
<table><tr><td valign="top">[a1]</td> <td valign="top">  F.R. Chang,  C.N. Chen,  "The generalized Cayley–Hamilton theorem for standard pencils"  ''Systems and Control Lett.'' , '''18'''  (1992)  pp. 179–182</td></tr><tr><td valign="top">[a2]</td> <td valign="top">  F.R. Gantmacher,  "The theory of matrices" , '''2''' , Chelsea  (1974)</td></tr><tr><td valign="top">[a3]</td> <td valign="top">  T. Kaczorek,  "Linear control systems" , '''I–II''' , Research Studies Press  (1992/93)</td></tr><tr><td valign="top">[a4]</td> <td valign="top">  T. Kaczorek,  "An extension of the Cayley–Hamilton theorem for non-square blocks matrices and computation of the left and right inverses of matrices"  ''Bull. Polon. Acad. Sci. Techn.'' , '''43''' :  1  (1995)  pp. 49–56</td></tr><tr><td valign="top">[a5]</td> <td valign="top">  T. Kaczorek,  "Extensions of the Cayley Hamilton theorem for $2$-D continuous discrete linear systems"  ''Appl. Math. and Comput. Sci.'' , '''4''' :  4  (1994)  pp. 507–515</td></tr><tr><td valign="top">[a6]</td> <td valign="top">  T. Kaczorek,  "An extension of the Cayley–Hamilton theorem for a standard pair of block matrices"  ''Appl. Math. and Comput. Sci.'' , '''8''' :  3  (1998)  pp. 511–516</td></tr><tr><td valign="top">[a7]</td> <td valign="top">  T. Kaczorek,  "An extension of Cayley–Hamillon theorem for singular $2$-D linear systems with non-square matrices"  ''Bull. Polon. Acad. Sci. Techn.'' , '''43''' :  1  (1995)  pp. 39–48</td></tr><tr><td valign="top">[a8]</td> <td valign="top">  T. Kaczorek,  "Generalizations of the Cayley–Hamilton theorem for nonsquare matrices"  ''Prace Sem. Podstaw Elektrotechnik. i Teor. Obwodów'' , '''XVIII–SPETO'''  (1995)  pp. 77–83</td></tr><tr><td valign="top">[a9]</td> <td valign="top">  P. Lancaster,  "Theory of matrices" , Acad. Press  (1969)</td></tr><tr><td valign="top">[a10]</td> <td valign="top">  F.L. Lewis,  "Cayley--Hamilton theorem and Fadeev's method for the matrix pencil $[ s E - A ]$" , ''Proc. 
22nd IEEE Conf Decision Control''  (1982)  pp. 1282–1288</td></tr><tr><td valign="top">[a11]</td> <td valign="top">  F.L. Lewis,  "Further remarks on the Cayley–Hamilton theorem and Leverrie's method for the matrix pencil $[ s E - A ]$"  ''IEEE Trans. Automat. Control'' , '''31'''  (1986)  pp. 869–870</td></tr><tr><td valign="top">[a12]</td> <td valign="top">  B.G. Mertzios,  M.A. Christodoulous,  "On the generalized Cayley–Hamilton theorem"  ''IEEE Trans. Automat. Control'' , '''31'''  (1986)  pp. 156–157</td></tr><tr><td valign="top">[a13]</td> <td valign="top">  N.M. Smart,  S. Barnett,  "The algebra of matrices in $n$-dimensional systems"  ''Math. Control Inform.'' , '''6'''  (1989)  pp. 121–133</td></tr><tr><td valign="top">[a14]</td> <td valign="top">  N.J. Theodoru,  "A Hamilton theorem"  ''IEEE Trans. Automat. Control'' , '''AC–34''' :  5  (1989)  pp. 563–565</td></tr><tr><td valign="top">[a15]</td> <td valign="top">  J. Victoria,  "A block-Cayley–Hamilton theorem"  ''Bull. Math. Soc. Sci. Math. Roum.'' , '''26''' :  1  (1982)  pp. 93–97</td></tr></table>

Latest revision as of 17:01, 1 July 2020

Let $C ^ { n \times m}$ be the set of complex $( n \times m )$-matrices and $A \in C ^ { n \times n }$. Let

\begin{equation*} \varphi ( \lambda ) = \operatorname { det } [ I _ { n } \lambda - A ] = \sum _ { i = 0 } ^ { n } a _ { i } \lambda ^ { i } \quad ( a _ { n } = 1 ) \end{equation*}

be the characteristic polynomial of $A$, where $ { I } _ { n }$ is the $( n \times n )$ identity matrix. The Cayley–Hamilton theorem says [a2], [a9] that every square matrix satisfies its own characteristic equation, i.e.

\begin{equation*} \varphi ( A ) = \sum _ { i = 0 } ^ { n } a _ { i } A ^ { i } = 0, \end{equation*}

where $0$ is the zero matrix.
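The theorem is easy to check numerically. A minimal NumPy sketch (illustrative, not part of the cited references; `np.poly` returns the coefficients of $\operatorname{det}[I_n\lambda - A]$, highest power first):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))

# reversing np.poly makes a[i] the coefficient of lambda**i, with a[n] == 1
a = np.poly(A)[::-1]
phiA = sum(a[i] * np.linalg.matrix_power(A, i) for i in range(n + 1))
assert np.allclose(phiA, 0)   # Cayley-Hamilton: phi(A) = 0
```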

The classical Cayley–Hamilton theorem can be extended to rectangular matrices. A matrix $A \in C ^ { m \times n }$ for $n > m$ may be written as $A = [ A _ { 1 } , A _ { 2 } ]$, $A _ { 1 } \in C ^ { m \times m }$, $A _ { 2 } \in C ^ { m \times ( n - m ) }$. Let

\begin{equation*} \operatorname { det } [ I _ { m } \lambda - A _ { 1 } ] = \sum _ { i = 0 } ^ { m } a _ { i } \lambda ^ { i } \quad ( a _ { m } = 1 ). \end{equation*}

Then the matrix $A \in C ^ { m \times n }$ ($n > m$) satisfies the equation [a8]

\begin{equation*} \sum _ { i = 0 } ^ { m } a _ { m - i } [ A _ { 1 } ^ { m - i + 1 } , A _ { 1 } ^ { m - i } A _ { 2 } ] = 0. \end{equation*}
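This identity is the factorization $\varphi(A_1)[A_1 , A_2] = 0$ written out term by term; equivalently $\sum_{i=0}^{m} a_i [A_1^{i+1}, A_1^{i} A_2] = 0$. A NumPy check of this equivalent form (an illustrative sketch, names not from the references):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))
A1, A2 = A[:, :m], A[:, m:]          # A = [A1, A2], A1 square

a = np.poly(A1)[::-1]                # a[i] multiplies lambda**i, a[m] == 1
S = sum(a[i] * np.hstack([np.linalg.matrix_power(A1, i + 1),
                          np.linalg.matrix_power(A1, i) @ A2])
        for i in range(m + 1))
assert np.allclose(S, 0)             # both blocks vanish: phi(A1)*A1, phi(A1)*A2
```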

A matrix $A \in C ^ { m \times n }$ ($m > n$) may be written as

\begin{equation*} A = \left[ \begin{array} { l } { A _ { 1 } } \\ { A _ { 2 } } \end{array} \right] , \quad A _ { 1 } \in C ^ { n \times n } , A _ { 2 } \in C ^ { ( m - n ) \times n }. \end{equation*}

Let

\begin{equation*} \operatorname { det } [ I _ { n } \lambda - A _ { 1 } ] = \sum _ { i = 0 } ^ { n } a _ { i } \lambda ^ { i } ( a _ { n } = 1 ). \end{equation*}

Then the matrix $A \in C ^ { m \times n }$ ($m > n$) satisfies the equation [a8]

\begin{equation*} \sum _ { i = 0 } ^ { n } a _ { n - i } \left[ \begin{array} { c } { A _ { 1 } ^ { n - i + 1 } } \\ { A _ { 2 } A _ { 1 } ^ { n - i } } \end{array} \right] = 0 _ { m \times n }. \end{equation*}
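The column case factors analogously, as $\sum_{i=0}^{n} a_i [A_1 ; A_2] A_1^{i} = [A_1 ; A_2]\,\varphi(A_1) = 0$. A short NumPy check (illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 5, 3
A = rng.standard_normal((m, n))
A1, A2 = A[:n, :], A[n:, :]          # A = [A1; A2], A1 square

a = np.poly(A1)[::-1]                # a[i] multiplies lambda**i, a[n] == 1
S = sum(a[i] * np.vstack([np.linalg.matrix_power(A1, i + 1),
                          A2 @ np.linalg.matrix_power(A1, i)])
        for i in range(n + 1))
assert np.allclose(S, 0)
```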

The Cayley–Hamilton theorem can also be extended to block matrices ([a4], [a13], [a15]). Let

\begin{equation} \tag{a1} A _ { 1 } = \left[ \begin{array} { c c c } { A _ { 11 } } & { \dots } & { A _ { 1 m } } \\ { \vdots } & { \ddots } & { \vdots } \\ { A _ { m 1 } } & { \dots } & { A _ { m m } } \end{array} \right] \in C ^ { m n \times m n }, \end{equation}

where the $A _ {i j } \in C ^ { n \times n }$ commute pairwise, i.e. $A _ { i j } A _ { k l } = A _ { k l } A _ { i j }$ for all $i , j , k , l = 1 , \dots , m$. Let

\begin{equation*} \Delta ( \Lambda ) = \operatorname { Det } [ I _ { m } \bigotimes \Lambda - A _ { 1 } ] = \end{equation*}

\begin{equation*} = \Lambda ^ { m } + D _ { 1 } \Lambda ^ { m - 1 } + \ldots + D _ { m - 1 } \Lambda + D _ { m } , D _ { k } \in C ^ { n \times n } , k = 1 , \ldots , m, \end{equation*}

be the matrix characteristic polynomial and let $\Lambda \in C ^ { n \times n }$ be the matrix (block) eigenvalue of $A _ { 1 }$, where $\otimes$ denotes the Kronecker product. The matrix polynomial $\Delta ( \Lambda )$ is obtained by developing the determinant of $[ I _ { m } \otimes \Lambda - A _ { 1 } ]$, considering its commuting blocks as elements [a15].

The block matrix (a1) satisfies the equation [a15]

\begin{equation*} \Delta ( A _ { 1 } ) = \sum _ { i = 0 } ^ { m } ( I _ { m } \bigotimes D _ { m - i } ) A _ { 1 } ^ { i } = 0 \quad ( D _ { 0 } = I _ { n } ). \end{equation*}
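For $m = 2$ this can be checked numerically: with commuting blocks, $\operatorname{Det}[I_2 \otimes \Lambda - A_1] = \Lambda^2 - (A_{11}+A_{22})\Lambda + (A_{11}A_{22} - A_{12}A_{21})$, so $D_1 = -(A_{11}+A_{22})$ and $D_2 = A_{11}A_{22}-A_{12}A_{21}$. A NumPy sketch using polynomials in one fixed matrix as commuting blocks (an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
M = rng.standard_normal((n, n))
I = np.eye(n)

# polynomials in a single matrix M always commute with each other
A11 = I + 2 * M
A12 = M @ M - I
A21 = 3 * M
A22 = I - M @ M @ M

A1 = np.block([[A11, A12], [A21, A22]])   # (2n x 2n), m = 2

# coefficients of the block determinant, blocks treated as scalars
D1 = -(A11 + A22)
D2 = A11 @ A22 - A12 @ A21

# block Cayley-Hamilton: A1^2 + (I_2 (x) D1) A1 + (I_2 (x) D2) = 0
Z = A1 @ A1 + np.kron(np.eye(2), D1) @ A1 + np.kron(np.eye(2), D2)
assert np.allclose(Z, 0)
```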

Consider now a rectangular block matrix $A = [ A _ { 1 } , A _ { 2 } ] \in C ^ { mn \times ( m n + p )}$, where $A _ { 1 }$ has the form (a1) and $A _ { 2 } \in C ^ { m n \times p }$ ($p > 0$). The matrix $A$ satisfies the equation [a4]

\begin{equation*} \sum _ { i = 0 } ^ { m } ( I _ { m } \bigotimes D _ { m - i } ) [ A _ { 1 } ^ { i + 1 } , A _ { 1 } ^ { i } A _ { 2 } ] = 0 \quad ( D _ { 0 } = I _ { n } ). \end{equation*}

If $A = \left[ \begin{array} { l } { A _ { 1 } } \\ { A _ { 2 } } \end{array} \right] \in C ^ { ( m n + p ) \times m n }$, where $A _ { 1 }$ has the form (a1) and $A _ { 2 } \in C ^ { p \times m n }$, then

\begin{equation*} \sum _ { i = 0 } ^ { m } \left[ \begin{array} { l } { A _ { 1 } } \\ { A _ { 2 } } \end{array} \right] ( I _ { m } \bigotimes D _ { m - i } ) A _ { 1 } ^ { i } = 0 ( D _ { 0 } = I _ { n } ). \end{equation*}

A pair of matrices $E , A \in C ^ { n \times n }$ is called regular if $\operatorname { det } [ E \lambda - A ] \neq 0$ for some $\lambda \in \mathbf{C}$ [a10], [a11], [a12]. The pair is called standard if there exist scalars $\alpha , \beta \in \bf{C}$ such that $E \alpha + A \beta = I _ { n }$. If the pair $E , A \in C ^ { n \times n }$ is regular, then the pair

\begin{equation} \tag{a2} \overline{E} = [ E \lambda - A ] ^ { - 1 } E , \overline{A} = [ E \lambda - A ] ^ { - 1 } A \end{equation}

is standard. If the pair $E , A \in C ^ { n \times n }$ is standard, then it is also commutative ($E A = A E$). Let a pair $E , A \in C ^ { n \times n }$ be standard (commutative) and

\begin{equation*} \Delta ( \lambda , \mu ) = \operatorname { det } [ E \lambda - A \mu ] = \sum _ { i = 0 } ^ { n } a _ { i , n - i } \lambda ^ { i } \mu ^ { n - i }. \end{equation*}

Then the pair satisfies the equation [a1]

\begin{equation*} \Delta ( A , E ) = \sum _ { i = 0 } ^ { n } a _ { i , n - i }A ^ { i } E ^ { n - i } = 0. \end{equation*}

In a particular case, with $\operatorname { det } [ E \lambda - A ] = \sum _ { i = 0 } ^ { n } a _ { i } \lambda ^ { i }$, it follows that $\sum _ { i = 0 } ^ { n } a _ { i } A ^ { i } E ^ { n - i } = 0$.
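The construction (a2) and the pencil version of the theorem can be checked numerically. The sketch below builds a standard pair from a regular pair with singular $E$, recovers the coefficients $a_{i,n-i}$ by sampling $\operatorname{det}[\bar E t - \bar A]$ (setting $\mu = 1$ in the homogeneous polynomial), and evaluates the sum; the sampling device is an assumption of this sketch, not part of the cited references:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
E = rng.standard_normal((n, n))
E[:, 0] = 0                          # make E singular; the pair stays regular
A = rng.standard_normal((n, n))

lam = 1.0                            # any lambda with det(E*lam - A) != 0
P = np.linalg.inv(E * lam - A)
Eb, Ab = P @ E, P @ A                # the standard pair of (a2)
assert np.allclose(Eb @ Ab, Ab @ Eb)          # a standard pair commutes

# a_{i,n-i} are the coefficients of det(Eb*t - Ab) as a polynomial in t
ts = np.arange(n + 1, dtype=float)
ps = [np.linalg.det(Eb * t - Ab) for t in ts]
c = np.polyfit(ts, ps, n)[::-1]      # c[i] multiplies t**i

S = sum(c[i] * np.linalg.matrix_power(Ab, i) @ np.linalg.matrix_power(Eb, n - i)
        for i in range(n + 1))
tol = 1e-6 * (1 + max(abs(p) for p in ps))
assert np.allclose(S, 0, atol=tol)   # Delta(A, E) = 0 for the standard pair
```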

Let $P _ { n } ( C )$ be the set of square complex matrices of order $n$ that commute in pairs and let $M _ { m } ( P _ { n } )$ be the set of square matrices partitioned into $m ^ { 2 }$ blocks belonging to $P _ { n } ( C )$.

Consider a standard pair of block matrices $E,A \in M _ { m } ( P _ { n } )$ and let the matrix polynomial

\begin{equation*} \Delta ( \Lambda , M ) = \text { Det } [ E \bigotimes \Lambda - A \bigotimes M ] = \end{equation*}

\begin{equation*} = \sum _ { i = 0 } ^ { m } D _ { i , m - i } \Lambda ^ { i } M ^ { m - i } , D _ { i j } \in C ^ { n \times n }, \end{equation*}

be its matrix characteristic polynomial. The pair $( \Lambda , M )$ is called the block-eigenvalue pair of the pair $E , A$.

Then [a6]

\begin{equation*} \Delta ( A , E ) = \sum _ { i = 0 } ^ { m } ( I \bigotimes D _ { i , m - i } ) A ^ { i } E ^ { m - i } = 0. \end{equation*}

The Cayley–Hamilton theorem can also be extended to singular two-dimensional linear systems described by Roesser-type or Fornasini–Marchesini-type models [a3], [a14]. The singular two-dimensional Roesser model is given by

\begin{equation*} \left[ \begin{array} { c c } { E _ { 1 } } & { E _ { 2 } } \\ { E _ { 3 } } & { E _ { 4 } } \end{array} \right] \left[ \begin{array} { c } { x _ { i + 1 ,\, j } ^ { h } } \\ { x _ { i ,\, j + 1 } ^ { v } } \end{array} \right] = \left[ \begin{array} { c c } { A _ { 1 } } & { A _ { 2 } } \\ { A _ { 3 } } & { A _ { 4 } } \end{array} \right] \left[ \begin{array} { c } { x _ { i j } ^ { h } } \\ { x _ { i j } ^ { v } } \end{array} \right] + \left[ \begin{array} { c } { B _ { 1 } } \\ { B _ { 2 } } \end{array} \right] u _ { ij }, \end{equation*}

\begin{equation*} i , j \in \mathbf{Z}_+ . \end{equation*}

Here, ${\bf Z}_+$ is the set of non-negative integers; $x _ { i j } ^ { h } \in \mathbf{R} ^ { n _ { 1 } }$, respectively $x _ { i j } ^ { v } \in \mathbf{R} ^ { n _ { 2 } }$, are the horizontal, respectively vertical, semi-state vector at the point $( i , j )$; $u _ { ij } \in \mathbf{R} ^ { m }$ is the input vector; $E _ { k }$, $A _ { k }$ ($k = 1 , \dots , 4$) and $B _ { i }$ ($i = 1,2$) have dimensions compatible with $x _ { i j } ^ { h }$ and $x _ { i j } ^ { v }$; and

\begin{equation*} \left[ \begin{array} { l l } { E _ { 1 } } & { E _ { 2 } } \\ { E _ { 3 } } & { E _ { 4 } } \end{array} \right] \end{equation*}

may be singular. The characteristic polynomial has the form

\begin{equation*} \Delta ( z _ { 1 } , z _ { 2 } ) = \operatorname { det } \left[ \begin{array} { c c } { E _ { 1 } z _ { 1 } - A _ { 1 } } & { E _ { 2 } z _ { 2 } - A _ { 2 } } \\ { E _ { 3 } z _ { 1 } - A _ { 3 } } & { E _ { 4 } z _ { 2 } - A_4 } \end{array} \right] = \end{equation*}

\begin{equation*} = \sum _ { i = 0 } ^ { r _ { 1 } } \sum _ { j = 0 } ^ { r _ { 2 } } a _ { i j } z _ { 1 } ^ { i } z _ { 2 } ^ { j } \end{equation*}

and the transition matrices $T _ { p , q }$, $p , q \in \mathbf{Z} _ { + }$, are defined by

\begin{equation*} \left[ \begin{array} { l l } { E _ { 1 } } & { 0 } \\ { E _ { 3 } } & { 0 } \end{array} \right] T _ { p , q - 1 } + \left[ \begin{array} { l l } { 0 } & { E _ { 2 } } \\ { 0 } & { E _ { 4 } } \end{array} \right] T _ { p - 1 , q } \, - \end{equation*}

\begin{equation*} - \left[ \begin{array} { l l } { A _ { 1 } } & { A _ { 2 } } \\ { A _ { 3 } } & { A _ { 4 } } \end{array} \right] T _ { p - 1 , q - 1 } = \end{equation*}

\begin{equation*} = \left\{ \begin{array} { l l } { I _ { n } , } & { p = q = 0, } \\ { 0 , } & { p \neq 0 \text { or } q \neq 0. } \end{array} \right. \end{equation*}

If $E = I _ { n }$, $n = n _ { 1 } + n _ { 2 }$ (the standard Roesser model), then the transition matrices $T _ { p q }$ may be computed recursively, using the formula $T _ { p q } = T _ { 10 } T _ { p - 1 , q } + T _ { 01 } T _ { p , q - 1 }$, where $T _ { 00 } = I _ { n }$,

\begin{equation*} T _ { 10 } = \left[ \begin{array} { c c } { A _ { 1 } } & { A _ { 2 } } \\ { 0 } & { 0 } \end{array} \right] ,\; T _ { 01 } = \left[ \begin{array} { c c } { 0 } & { 0 } \\ { A _ { 3 } } & { A _ { 4 } } \end{array} \right]. \end{equation*}

The matrices $T _ { p q }$ satisfy the equation [a3]

\begin{equation*} \sum _ { i = 0 } ^ { r _ { 1 } } \sum _ { j = 0 } ^ { r _ { 2 } } a _ { i j } T _ { i j } = 0. \end{equation*}
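For the standard Roesser model ($E = I_n$) the two-dimensional theorem can be checked numerically: compute the $T_{pq}$ by the recursion above, recover the coefficients $a_{ij}$ by sampling the determinant on a grid, and evaluate the double sum. The grid-sampling step is an illustrative device of this sketch, not taken from [a3]:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)
n1 = n2 = 2
n = n1 + n2
A = rng.standard_normal((n, n)) / n       # blocks [[A1, A2], [A3, A4]]

# transition matrices of the standard Roesser model
T10 = np.vstack([A[:n1, :], np.zeros((n2, n))])   # [[A1, A2], [0, 0]]
T01 = np.vstack([np.zeros((n1, n)), A[n1:, :]])   # [[0, 0], [A3, A4]]
Z = np.zeros((n, n))
T = {(0, 0): np.eye(n)}
for p, q in product(range(n1 + 1), range(n2 + 1)):
    if (p, q) != (0, 0):
        T[p, q] = T10 @ T.get((p - 1, q), Z) + T01 @ T.get((p, q - 1), Z)

# coefficients a_{ij} of det([[I z1 - A1, -A2], [-A3, I z2 - A4]]):
# sample on a grid and solve the two one-dimensional Vandermonde systems
s = np.arange(n1 + 1, dtype=float)
t = np.arange(n2 + 1, dtype=float)
P = np.array([[np.linalg.det(
        np.block([[np.eye(n1) * z1, np.zeros((n1, n2))],
                  [np.zeros((n2, n1)), np.eye(n2) * z2]]) - A)
      for z2 in t] for z1 in s])
V1 = np.vander(s, n1 + 1, increasing=True)
V2 = np.vander(t, n2 + 1, increasing=True)
a = np.linalg.solve(V1, P) @ np.linalg.inv(V2).T  # a[i, j] multiplies z1^i z2^j

S = sum(a[i, j] * T[i, j] for i in range(n1 + 1) for j in range(n2 + 1))
assert np.allclose(S, 0, atol=1e-6)
```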

The singular two-dimensional Fornasini–Marchesini model is given by

\begin{equation*} E x _ { i + 1 ,\, j + 1 } = A _ { 0} x _ {i j } + A _ { 1 } x _ { i + 1 ,\, j } + A _ { 2 } x _ { i ,\, j + 1 } + B u _ { i j }, \end{equation*}

\begin{equation*} i , j \in \mathbf Z _ { + }, \end{equation*}

where $x _ { i j } \in \mathbf{R} ^ { n }$ is the local semi-state vector at the point $( i , j )$, $u _ { ij } \in \mathbf{R} ^ { m }$ is the input vector, $E , A _ { k } \in \mathbf{R} ^ { n \times n }$ ($k = 0,1,2$), $B \in \mathbf{R} ^ { n \times m }$ and $E$ is possibly singular. The characteristic polynomial has the form

\begin{equation*} \Delta ( z _ { 1 } , z _ { 2 } ) = \operatorname { det } [ E z _ { 1 } z _ { 2 } - A _ { 1 } z _ { 1 } - A _ { 2 } z _ { 2 } - A _ { 0 } ] = \end{equation*}

\begin{equation*} = \sum _ { i = 0 } ^ { r _ { 1 } } \sum _ { j = 0 } ^ { r _ { 2 } } a _ { i j } z_{1}^ {i} z _ { 2 } ^ { j } \end{equation*}

and the transition matrices $T _ { p , q }$, $p , q \in \mathbf{Z} _ { + }$, are defined by

\begin{equation*} E T _ { p q } - A _ { 0 } T _ { p - 1 , q - 1 } - A _ { 1 } T _ { p , q - 1 } - A _ { 2 } T _ { p - 1 , q } = \end{equation*}

\begin{equation*} = \left\{ \begin{array} { l l } { I _ { n } , } & { p = q = 0, } \\ { 0 , } & { p \neq 0 \text { or } q \neq 0. } \end{array} \right. \end{equation*}

The matrices $T _ { p q }$ satisfy the equation

\begin{equation*} \sum _ { i = 0 } ^ { r _ { 1 } } \sum _ { j = 0 } ^ { r _ { 2 } } a _ { i j } T _ { i j } = 0. \end{equation*}
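As for the Roesser model, the nonsingular case $E = I_n$ can be checked numerically, with $T_{pq} = A_0 T_{p-1,q-1} + A_1 T_{p,q-1} + A_2 T_{p-1,q}$ for $(p,q) \neq (0,0)$. The grid-sampling recovery of the $a_{ij}$ is again an illustrative device of this sketch:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(6)
n = 2
A0, A1, A2 = (rng.standard_normal((n, n)) / n for _ in range(3))

# transition matrices of the standard (E = I_n) Fornasini-Marchesini model
Z = np.zeros((n, n))
T = {(0, 0): np.eye(n)}
for p, q in product(range(n + 1), range(n + 1)):
    if (p, q) != (0, 0):
        T[p, q] = (A0 @ T.get((p - 1, q - 1), Z)
                   + A1 @ T.get((p, q - 1), Z)
                   + A2 @ T.get((p - 1, q), Z))

# coefficients a_{ij} of det(I z1 z2 - A1 z1 - A2 z2 - A0), degree n in each z
s = np.arange(1.0, n + 2)
P = np.array([[np.linalg.det(np.eye(n) * z1 * z2 - A1 * z1 - A2 * z2 - A0)
               for z2 in s] for z1 in s])
V = np.vander(s, n + 1, increasing=True)
a = np.linalg.solve(V, P) @ np.linalg.inv(V).T    # a[i, j] multiplies z1^i z2^j

S = sum(a[i, j] * T[i, j] for i in range(n + 1) for j in range(n + 1))
assert np.allclose(S, 0, atol=1e-6)
```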

The theorems may also be extended to two-dimensional continuous-discrete linear systems [a5].

References

[a1] F.R. Chang, C.N. Chen, "The generalized Cayley–Hamilton theorem for standard pencils" Systems and Control Lett. , 18 (1992) pp. 179–182
[a2] F.R. Gantmacher, "The theory of matrices" , 2 , Chelsea (1974)
[a3] T. Kaczorek, "Linear control systems" , I–II , Research Studies Press (1992/93)
[a4] T. Kaczorek, "An extension of the Cayley–Hamilton theorem for non-square blocks matrices and computation of the left and right inverses of matrices" Bull. Polon. Acad. Sci. Techn. , 43 : 1 (1995) pp. 49–56
[a5] T. Kaczorek, "Extensions of the Cayley–Hamilton theorem for $2$-D continuous-discrete linear systems" Appl. Math. and Comput. Sci. , 4 : 4 (1994) pp. 507–515
[a6] T. Kaczorek, "An extension of the Cayley–Hamilton theorem for a standard pair of block matrices" Appl. Math. and Comput. Sci. , 8 : 3 (1998) pp. 511–516
[a7] T. Kaczorek, "An extension of the Cayley–Hamilton theorem for singular $2$-D linear systems with non-square matrices" Bull. Polon. Acad. Sci. Techn. , 43 : 1 (1995) pp. 39–48
[a8] T. Kaczorek, "Generalizations of the Cayley–Hamilton theorem for nonsquare matrices" Prace Sem. Podstaw Elektrotechnik. i Teor. Obwodów , XVIII–SPETO (1995) pp. 77–83
[a9] P. Lancaster, "Theory of matrices" , Acad. Press (1969)
[a10] F.L. Lewis, "Cayley–Hamilton theorem and Fadeev's method for the matrix pencil $[ s E - A ]$" , Proc. 22nd IEEE Conf. Decision Control (1982) pp. 1282–1288
[a11] F.L. Lewis, "Further remarks on the Cayley–Hamilton theorem and Leverrier's method for the matrix pencil $[ s E - A ]$" IEEE Trans. Automat. Control , 31 (1986) pp. 869–870
[a12] B.G. Mertzios, M.A. Christodoulous, "On the generalized Cayley–Hamilton theorem" IEEE Trans. Automat. Control , 31 (1986) pp. 156–157
[a13] N.M. Smart, S. Barnett, "The algebra of matrices in $n$-dimensional systems" Math. Control Inform. , 6 (1989) pp. 121–133
[a14] N.J. Theodoru, "A Hamilton theorem" IEEE Trans. Automat. Control , AC–34 : 5 (1989) pp. 563–565
[a15] J. Victoria, "A block-Cayley–Hamilton theorem" Bull. Math. Soc. Sci. Math. Roum. , 26 : 1 (1982) pp. 93–97
How to Cite This Entry:
Cayley-Hamilton theorem. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Cayley-Hamilton_theorem&oldid=22271
This article was adapted from an original article by T. Kaczorek (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article