Cayley-Hamilton theorem
Let $C ^ { n \times m}$ be the set of complex $( n \times m )$-matrices and $A \in C ^ { n \times n }$. Let
\begin{equation*} \varphi ( \lambda ) = \operatorname { det } [ I _ { n } \lambda - A ] = \sum _ { i = 0 } ^ { n } a _ { i } \lambda ^ { i } \quad ( a _ { n } = 1 ) \end{equation*}
be the characteristic polynomial of $A$, where $ { I } _ { n }$ is the $( n \times n )$ identity matrix. The Cayley–Hamilton theorem says [a2], [a9] that every square matrix satisfies its own characteristic equation, i.e.
\begin{equation*} \varphi ( A ) = \sum _ { i = 0 } ^ { n } a _ { i } A ^ { i } = 0, \end{equation*}
where $0$ is the zero-matrix.
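This identity is straightforward to verify numerically. The following sketch (Python with NumPy; not part of the original article) checks $\varphi ( A ) = 0$ for a randomly generated matrix, with the seed and size chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))

# Coefficients of det(I_n*lambda - A), highest power first (leading 1 = a_n).
coeffs = np.poly(A)

# Evaluate phi(A) = A^n + a_{n-1} A^{n-1} + ... + a_0 I_n by Horner's rule.
phi_A = np.zeros((n, n))
for c in coeffs:
    phi_A = phi_A @ A + c * np.eye(n)

print(np.allclose(phi_A, 0, atol=1e-9))   # True: phi(A) is the zero matrix
```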
The classical Cayley–Hamilton theorem can be extended to rectangular matrices. A matrix $A \in C ^ { m \times n }$ with $n > m$ may be written as $A = [ A _ { 1 } , A _ { 2 } ]$, where $A _ { 1 } \in C ^ { m \times m }$ and $A _ { 2 } \in C ^ { m \times ( n - m ) }$. Let
\begin{equation*} \operatorname { det } [ I _ { m } \lambda - A _ { 1 } ] = \sum _ { i = 0 } ^ { m } a _ { i } \lambda ^ { i } \quad ( a _ { m } = 1 ). \end{equation*}
Then the matrix $A \in C ^ { m \times n }$ ($n > m$) satisfies the equation [a8]
\begin{equation*} \sum _ { i = 0 } ^ { m } a _ { m - i } [ A _ { 1 } ^ { m - i } , A _ { 1 } ^ { n - i - 1 } A _ { 2 } ] = 0. \end{equation*}
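A minimal numerical sketch of this identity, using the indexing of the formula above and random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 5
A1 = rng.standard_normal((m, m))      # square left block
A2 = rng.standard_normal((m, n - m))  # remaining columns

# Coefficients a_0, ..., a_m of det(I_m*lambda - A1), with a_m = 1.
a = np.poly(A1)[::-1]   # np.poly is highest-first; reverse to index by power

S = np.zeros((m, n))
for i in range(m + 1):
    left = a[m - i] * np.linalg.matrix_power(A1, m - i)
    right = a[m - i] * np.linalg.matrix_power(A1, n - i - 1) @ A2
    S += np.hstack([left, right])

print(np.allclose(S, 0, atol=1e-9))   # True: the block identity holds
```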
A matrix $A \in C ^ { m \times n }$ ($m > n$) may be written as
\begin{equation*} A = \left[ \begin{array} { l } { A _ { 1 } } \\ { A _ { 2 } } \end{array} \right] , \quad A _ { 1 } \in C ^ { n \times n } , A _ { 2 } \in C ^ { ( m - n ) \times n }. \end{equation*}
Let
\begin{equation*} \operatorname { det } [ I _ { n } \lambda - A _ { 1 } ] = \sum _ { i = 0 } ^ { n } a _ { i } \lambda ^ { i } ( a _ { n } = 1 ). \end{equation*}
Then the matrix $A \in C ^ { m \times n }$ ($m > n$) satisfies the equation [a8]
\begin{equation*} \sum _ { i = 0 } ^ { n } a _ { n - i } \left[ \begin{array} { c } { A _ { 1 } ^ { m - i } } \\ { A _ { 2 } A _ { 1 } ^ { m - i - 1 } } \end{array} \right] = 0 _ { m \times n }. \end{equation*}
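The companion case $m > n$ can be checked in the same way (again with random illustrative data):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 5, 3
A1 = rng.standard_normal((n, n))        # square top block
A2 = rng.standard_normal((m - n, n))    # remaining rows

a = np.poly(A1)[::-1]                   # a_0, ..., a_n with a_n = 1

S = np.zeros((m, n))
for i in range(n + 1):
    top = np.linalg.matrix_power(A1, m - i)
    bottom = A2 @ np.linalg.matrix_power(A1, m - i - 1)
    S += a[n - i] * np.vstack([top, bottom])

print(np.allclose(S, 0, atol=1e-9))     # True
```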
The Cayley–Hamilton theorem can also be extended to block matrices ([a4], [a13], [a15]). Let
\begin{equation} \tag{a1} A _ { 1 } = \left[ \begin{array} { c c c } { A _ { 11 } } & { \dots } & { A _ { 1 m } } \\ { \dots } & { \dots } & { \dots } \\ { A _ { m 1 } } & { \dots } & { A _ { m m } } \end{array} \right] \in C ^ { m n \times m n }, \end{equation}
where the blocks $A _ {i j } \in C ^ { n \times n }$ commute pairwise, i.e. $A _ { i j } A _ { k l } = A _ { k l } A _ { i j }$ for all $i , j , k , l = 1 , \dots , m$. Let
\begin{equation*} \Delta ( \Lambda ) = \operatorname { Det } [ I _ { m } \bigotimes \Lambda - A _ { 1 } ] = \Lambda ^ { m } + D _ { 1 } \Lambda ^ { m - 1 } + \ldots + D _ { m - 1 } \Lambda + D _ { m }, \quad D _ { k } \in C ^ { n \times n } , \quad k = 1 , \ldots , m, \end{equation*}
be the matrix characteristic polynomial, where $\Lambda \in C ^ { n \times n }$ is a matrix (block) eigenvalue of $A _ { 1 }$ and $\otimes$ denotes the Kronecker product. The matrix polynomial $\Delta ( \Lambda )$ is obtained by developing the determinant of $[ I _ { m } \otimes \Lambda - A _ { 1 } ]$, considering its commuting blocks as elements [a15].
The block matrix (a1) satisfies the equation [a15]
\begin{equation*} \Delta ( A _ { 1 } ) = \sum _ { i = 0 } ^ { m } ( I _ { m } \bigotimes D _ { m - i } ) A _ { 1 } ^ { i } = 0 ( D _ { 0 } = I _ { n } ). \end{equation*}
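A minimal sketch for $m = 2$: to satisfy the commutativity assumption, the blocks are chosen as polynomials in a single random matrix $X$ (this particular construction is only illustrative), and $D _ { 1 }$, $D _ { 2 }$ are obtained by developing the block determinant as for a scalar $( 2 \times 2 )$-matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
X = rng.standard_normal((n, n))
I = np.eye(n)

# Pairwise commuting blocks: all are polynomials in the same matrix X.
A11, A12 = 2 * I + X, X @ X - I
A21, A22 = 3 * X, I - 2 * X
A1 = np.block([[A11, A12], [A21, A22]])

# Developing Det[I_2 (x) Lambda - A_1] with the blocks treated as scalars gives
# Lambda^2 + D_1 Lambda + D_2 with:
D1 = -(A11 + A22)
D2 = A11 @ A22 - A12 @ A21

Delta = A1 @ A1 + np.kron(np.eye(2), D1) @ A1 + np.kron(np.eye(2), D2)
print(np.allclose(Delta, 0, atol=1e-8))   # True: Delta(A_1) = 0
```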
Consider now a rectangular block matrix $A = [ A_{1} , A _ { 2 } ] \in C ^ { mn \times ( m n + p )}$, where $A _ { 1 }$ has the form (a1) and $A _ { 2 } \in C ^ { m n \times p }$ ($p > 0$). The matrix $A$ satisfies the equation [a4]
\begin{equation*} \sum _ { i = 0 } ^ { m } ( I _ { m } \bigotimes D _ { m - i } ) [ A _ { 1 } ^ { i + 1 } , A _ { 1 } ^ { i } A _ { 2 } ] = 0 ( D _ { 0 } = I _ { n } ). \end{equation*}
If $A = \left[ \begin{array} { l } { A _ { 1 } } \\ { A _ { 2 } } \end{array} \right] \in C ^ { ( m n + p ) \times m n }$, where $A _ { 1 }$ has the form (a1) and $A _ { 2 } \in C ^ { p \times m n }$, then
\begin{equation*} \sum _ { i = 0 } ^ { m } \left[ \begin{array} { l } { A _ { 1 } } \\ { A _ { 2 } } \end{array} \right] ( I _ { m } \bigotimes D _ { m - i } ) A _ { 1 } ^ { i } = 0 ( D _ { 0 } = I _ { n } ). \end{equation*}
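Both rectangular block identities above can be checked with the same kind of commuting-block construction; the blocks and the padding matrices $A _ { 2 }$ below are illustrative random data.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 3, 2
X = rng.standard_normal((n, n))
I = np.eye(n)

# Commuting blocks (polynomials in X), as in the previous sketch, with m = 2.
A11, A12, A21, A22 = I + X, X @ X, 2 * X, I - X
A1 = np.block([[A11, A12], [A21, A22]])
D = [np.eye(n), -(A11 + A22), A11 @ A22 - A12 @ A21]   # D_0 = I_n, D_1, D_2

A2_wide = rng.standard_normal((2 * n, p))   # extra columns, A_2 in C^{mn x p}
A2_tall = rng.standard_normal((p, 2 * n))   # extra rows,    A_2 in C^{p x mn}

S_wide = np.zeros((2 * n, 2 * n + p))
S_tall = np.zeros((2 * n + p, 2 * n))
for i in range(3):
    K = np.kron(np.eye(2), D[2 - i])                      # I_m (x) D_{m-i}
    Ai = np.linalg.matrix_power(A1, i)
    S_wide += K @ np.hstack([Ai @ A1, Ai @ A2_wide])      # [A_1^{i+1}, A_1^i A_2]
    S_tall += np.vstack([A1, A2_tall]) @ K @ Ai

print(np.allclose(S_wide, 0, atol=1e-7), np.allclose(S_tall, 0, atol=1e-7))
```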
A pair of matrices $E , A \in C ^ { n \times n }$ is called regular if $\operatorname { det } [ E \lambda - A ] \neq 0$ for some $\lambda \in \mathbf{C}$ [a10], [a11], [a12]. The pair is called standard if there exist scalars $\alpha , \beta \in \mathbf{C}$ such that $E \alpha + A \beta = I _ { n }$. If the pair $E , A \in C ^ { n \times n }$ is regular, then the pair
\begin{equation} \tag{a2} \overline{E} = [ E \lambda - A ] ^ { - 1 } E , \overline{A} = [ E \lambda - A ] ^ { - 1 } A \end{equation}
is standard. If the pair $E , A \in C ^ { n \times n }$ is standard, then it is also commutative ($E A = A E$). Let a pair $E , A \in C ^ { n \times n }$ be standard (commutative) and
\begin{equation*} \Delta ( \lambda , \mu ) = \operatorname { det } [ E \lambda - A \mu ] = \sum _ { i = 0 } ^ { n } a _ { i , n - i } \lambda ^ { i } \mu ^ { n - i }. \end{equation*}
Then the pair satisfies the equation [a1]
\begin{equation*} \Delta ( A , E ) = \sum _ { i = 0 } ^ { n } a _ { i , n - i }A ^ { i } E ^ { n - i } = 0. \end{equation*}
In the particular case $\mu = 1$, with $\operatorname { det } [ E \lambda - A ] = \sum _ { i = 0 } ^ { n } a _ { i } \lambda ^ { i }$, it follows that $\sum _ { i = 0 } ^ { n } a _ { i } A ^ { i } E ^ { n - i } = 0$.
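A numerical sketch for a standard pair: here the pair is built directly as $E = I _ { n } - A$, so that $E + A = I _ { n }$ and $E A = A E$ (any standard pair, e.g. one obtained from a regular pair by (a2), could be used instead); the coefficients $a _ { i , n - i }$ are recovered from $\operatorname { det } [ E s - A ]$ by polynomial fitting.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal((n, n))
E = np.eye(n) - A                      # E*1 + A*1 = I_n: a standard, commuting pair

# p(s) = det(E s - A) = sum_i a_{i,n-i} s^i (the mu = 1 specialization above);
# its coefficients are recovered by sampling p at n+1 points and interpolating.
s_pts = np.arange(n + 1, dtype=float)
vals = [np.linalg.det(E * s - A) for s in s_pts]
a = np.polyfit(s_pts, vals, n)[::-1]   # a[i] = a_{i,n-i}

Delta = sum(a[i] * np.linalg.matrix_power(A, i) @ np.linalg.matrix_power(E, n - i)
            for i in range(n + 1))
print(np.allclose(Delta, 0, atol=1e-6))   # True up to round-off
```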
Let $P _ { n } ( C )$ be the set of $( n \times n )$ complex matrices that commute pairwise and let $M _ { m } ( P _ { n } )$ be the set of square matrices partitioned into $m ^ { 2 }$ blocks belonging to $P _ { n } ( C )$.
Consider a standard pair of block matrices $E,A \in M _ { m } ( P _ { n } )$ and let the matrix polynomial
\begin{equation*} \Delta ( \Lambda , M ) = \operatorname { Det } [ E \bigotimes \Lambda - A \bigotimes M ] = \sum _ { i = 0 } ^ { m } D _ { i , m - i } \Lambda ^ { i } M ^ { m - i } , \quad D _ { i j } \in C ^ { n \times n }, \end{equation*}
be its matrix characteristic polynomial. The pair $( \Lambda , M )$ is called the block-eigenvalue pair of the pair $E , A$.
Then [a6]
\begin{equation*} \Delta ( A , E ) = \sum _ { i = 0 } ^ { m } ( I _ { m } \bigotimes D _ { i , m - i } ) A ^ { i } E ^ { m - i } = 0. \end{equation*}
The Cayley–Hamilton theorem can also be extended to singular two-dimensional linear systems described by Roesser-type or Fornasini–Marchesini-type models [a3], [a14]. The singular two-dimensional Roesser model is given by
\begin{equation*} \left[ \begin{array} { c c } { E _ { 1 } } & { E _ { 2 } } \\ { E _ { 3 } } & { E _ { 4 } } \end{array} \right] \left[ \begin{array} { c } { x _ { i + 1 ,\, j } ^ { h } } \\ { x _ { i ,\, j + 1 } ^ { v } } \end{array} \right] = \left[ \begin{array} { c c } { A _ { 1 } } & { A _ { 2 } } \\ { A _ { 3 } } & { A _ { 4 } } \end{array} \right] \left[ \begin{array} { c } { x _ { i j } ^ { h } } \\ { x _ { i j } ^ { v } } \end{array} \right] + \left[ \begin{array} { c } { B _ { 1 } } \\ { B _ { 2 } } \end{array} \right] u _ { i j }, \quad i , j \in \mathbf{Z}_+ . \end{equation*}
Here, $\mathbf{Z}_+$ is the set of non-negative integers; $x _ { i j } ^ { h } \in \mathbf{R} ^ { n _ { 1 } }$ and $x _ { i j } ^ { v } \in \mathbf{R} ^ { n _ { 2 } }$ are the horizontal and vertical semi-state vectors at the point $( i , j )$; $u _ { i j } \in \mathbf{R} ^ { m }$ is the input vector; $E _ { k }$, $A _ { k }$ ($k = 1 , \dots , 4$), $B _ { 1 }$ and $B _ { 2 }$ are real matrices of dimensions compatible with $x _ { i j } ^ { h }$ and $x _ { i j } ^ { v }$; and
\begin{equation*} \left[ \begin{array} { l l } { E _ { 1 } } & { E _ { 2 } } \\ { E _ { 3 } } & { E _ { 4 } } \end{array} \right] \end{equation*}
may be singular. The characteristic polynomial has the form
\begin{equation*} \Delta ( z _ { 1 } , z _ { 2 } ) = \operatorname { det } \left[ \begin{array} { c c } { E _ { 1 } z _ { 1 } - A _ { 1 } } & { E _ { 2 } z _ { 2 } - A _ { 2 } } \\ { E _ { 3 } z _ { 1 } - A _ { 3 } } & { E _ { 4 } z _ { 2 } - A _ { 4 } } \end{array} \right] = \sum _ { i = 0 } ^ { r _ { 1 } } \sum _ { j = 0 } ^ { r _ { 2 } } a _ { i j } z _ { 1 } ^ { i } z _ { 2 } ^ { j } \end{equation*}
and the transition matrices $T _ { p , q }$, $p , q \in \mathbf{Z} _ { + }$, are defined by
\begin{equation*} \left[ \begin{array} { l l } { E _ { 1 } } & { 0 } \\ { E _ { 3 } } & { 0 } \end{array} \right] T _ { p , q - 1 } + \left[ \begin{array} { l l } { 0 } & { E _ { 2 } } \\ { 0 } & { E _ { 4 } } \end{array} \right] T _ { p - 1 , q } + \left[ \begin{array} { l l } { A _ { 1 } } & { A _ { 2 } } \\ { A _ { 3 } } & { A _ { 4 } } \end{array} \right] T _ { p - 1 , q - 1 } = \left\{ \begin{array} { l l } { I _ { n } , } & { p = q = 0, } \\ { 0 , } & { p \neq 0 \text { and/or } q \neq 0. } \end{array} \right. \end{equation*}
If $E = I _ { n }$, $n = n _ { 1 } + n _ { 2 }$ (the standard Roesser model), then the transition matrices $T _ { p q }$ may be computed recursively, using the formula $T _ { p q } = T _ { 10 } T _ { p - 1 , q } + T _ { 01 } T _ { p , q - 1 }$, where $T _ { 00 } = I _ { n }$,
\begin{equation*} T _ { 10 } = \left[ \begin{array} { c c } { A _ { 1 } } & { A _ { 2 } } \\ { 0 } & { 0 } \end{array} \right] ,\; T _ { 01 } = \left[ \begin{array} { c c } { 0 } & { 0 } \\ { A _ { 3 } } & { A _ { 4 } } \end{array} \right]. \end{equation*}
The matrices $T _ { p q }$ satisfy the equation [a3]
\begin{equation*} \sum _ { i = 0 } ^ { r _ { 1 } } \sum _ { j = 0 } ^ { r _ { 2 } } a _ { i j } T _ { i j } = 0. \end{equation*}
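A sketch of this two-dimensional version for the standard Roesser model ($E = I _ { n }$), using SymPy to expand the bivariate characteristic polynomial; the matrices $A _ { 1 } , \dots , A _ { 4 }$ below are small illustrative examples, not taken from the cited references.

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
n1, n2 = 2, 1
n = n1 + n2

# Illustrative data for the standard Roesser model (E = I_n).
A1 = sp.Matrix([[1, 2], [0, 1]])
A2 = sp.Matrix([[1], [2]])
A3 = sp.Matrix([[2, 1]])
A4 = sp.Matrix([[3]])

# Bivariate characteristic polynomial det[I z1 - A1, -A2; -A3, I z2 - A4].
G = sp.Matrix.vstack(sp.Matrix.hstack(z1 * sp.eye(n1) - A1, -A2),
                     sp.Matrix.hstack(-A3, z2 * sp.eye(n2) - A4))
char_poly = sp.Poly(G.det(), z1, z2)

# Transition matrices: T_00 = I_n, T_pq = T_10 T_{p-1,q} + T_01 T_{p,q-1}.
T10 = sp.Matrix.vstack(sp.Matrix.hstack(A1, A2), sp.zeros(n2, n))
T01 = sp.Matrix.vstack(sp.zeros(n1, n), sp.Matrix.hstack(A3, A4))
cache = {(0, 0): sp.eye(n)}

def trans(p, q):
    if p < 0 or q < 0:
        return sp.zeros(n, n)
    if (p, q) not in cache:
        cache[(p, q)] = T10 * trans(p - 1, q) + T01 * trans(p, q - 1)
    return cache[(p, q)]

# Two-dimensional Cayley-Hamilton: the sum of a_ij T_ij over all monomials vanishes.
S = sp.zeros(n, n)
for (i, j), a_ij in char_poly.terms():
    S += a_ij * trans(i, j)

print(S == sp.zeros(n, n))   # True
```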
The singular two-dimensional Fornasini–Marchesini model is given by
\begin{equation*} E x _ { i + 1 ,\, j + 1 } = A _ { 0} x _ {i j } + A _ { 1 } x _ { i + 1 ,\, j } + A _ { 2 } x _ { i ,\, j + 1 } + B u _ { i j }, \end{equation*}
\begin{equation*} i , j \in \mathbf Z _ { + }, \end{equation*}
where $x _ { i j } \in \mathbf{R} ^ { n }$ is the local semi-state vector at the point $( i , j )$, $u _ { i j } \in \mathbf{R} ^ { m }$ is the input vector, $E , A _ { k } \in \mathbf{R} ^ { n \times n }$ ($k = 0 , 1 , 2$), $B \in \mathbf{R} ^ { n \times m }$, and $E$ is possibly singular. The characteristic polynomial has the form
\begin{equation*} \Delta ( z _ { 1 } , z _ { 2 } ) = \operatorname { det } [ E z _ { 1 } z _ { 2 } - A _ { 1 } z _ { 1 } - A _ { 2 } z _ { 2 } - A _ { 0 } ] = \sum _ { i = 0 } ^ { r _ { 1 } } \sum _ { j = 0 } ^ { r _ { 2 } } a _ { i j } z _ { 1 } ^ { i } z _ { 2 } ^ { j } \end{equation*}
and the transition matrices $T _ { p , q }$, $p , q \in \mathbf{Z} _ { + }$, are defined by
\begin{equation*} E T _ { p q } - A _ { 0 } T _ { p - 1 , q - 1 } - A _ { 1 } T _ { p , q - 1 } - A _ { 2 } T _ { p - 1 , q } = \left\{ \begin{array} { l l } { I _ { n } , } & { p = q = 0, } \\ { 0 , } & { p \neq 0 \text { and/or } q \neq 0. } \end{array} \right. \end{equation*}
The matrices $T _ { p q }$ satisfy the equation
\begin{equation*} \sum _ { i = 0 } ^ { r _ { 1 } } \sum _ { j = 0 } ^ { r _ { 2 } } a _ { i j } T _ { i j } = 0. \end{equation*}
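The Fornasini–Marchesini version can be checked analogously. The sketch below assumes $E = I _ { n }$, so that the defining relation becomes an explicit recursion for $T _ { p q }$; the integer matrices are illustrative only.

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
n = 2

# Illustrative data; E = I_n is assumed.
A0 = sp.Matrix([[1, 0], [2, 1]])
A1 = sp.Matrix([[0, 1], [1, 1]])
A2 = sp.Matrix([[1, 1], [0, 2]])

char_poly = sp.Poly((z1 * z2 * sp.eye(n) - A1 * z1 - A2 * z2 - A0).det(), z1, z2)

# T_00 = I_n and, for (p,q) != (0,0),
# T_pq = A0 T_{p-1,q-1} + A1 T_{p,q-1} + A2 T_{p-1,q} (T_pq = 0 for p < 0 or q < 0).
cache = {(0, 0): sp.eye(n)}

def trans(p, q):
    if p < 0 or q < 0:
        return sp.zeros(n, n)
    if (p, q) not in cache:
        cache[(p, q)] = (A0 * trans(p - 1, q - 1)
                         + A1 * trans(p, q - 1)
                         + A2 * trans(p - 1, q))
    return cache[(p, q)]

S = sp.zeros(n, n)
for (i, j), a_ij in char_poly.terms():
    S += a_ij * trans(i, j)

print(S == sp.zeros(n, n))   # True
```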
The theorems may also be extended to two-dimensional continuous-discrete linear systems [a5].
References
[a1] | F.R. Chang, C.N. Chen, "The generalized Cayley–Hamilton theorem for standard pencils" Systems and Control Lett. , 18 (1992) pp. 179–182 |
[a2] | F.R. Gantmacher, "The theory of matrices" , 2 , Chelsea (1974) |
[a3] | T. Kaczorek, "Linear control systems" , I–II , Research Studies Press (1992/93) |
[a4] | T. Kaczorek, "An extension of the Cayley–Hamilton theorem for non-square block matrices and computation of the left and right inverses of matrices" Bull. Polon. Acad. Sci. Techn. , 43 : 1 (1995) pp. 49–56 |
[a5] | T. Kaczorek, "Extensions of the Cayley–Hamilton theorem for $2$-D continuous-discrete linear systems" Appl. Math. and Comput. Sci. , 4 : 4 (1994) pp. 507–515 |
[a6] | T. Kaczorek, "An extension of the Cayley–Hamilton theorem for a standard pair of block matrices" Appl. Math. and Comput. Sci. , 8 : 3 (1998) pp. 511–516 |
[a7] | T. Kaczorek, "An extension of the Cayley–Hamilton theorem for singular $2$-D linear systems with non-square matrices" Bull. Polon. Acad. Sci. Techn. , 43 : 1 (1995) pp. 39–48 |
[a8] | T. Kaczorek, "Generalizations of the Cayley–Hamilton theorem for nonsquare matrices" Prace Sem. Podstaw Elektrotechnik. i Teor. Obwodów , XVIII–SPETO (1995) pp. 77–83 |
[a9] | P. Lancaster, "Theory of matrices" , Acad. Press (1969) |
[a10] | F.L. Lewis, "Cayley–Hamilton theorem and Fadeev's method for the matrix pencil $[ s E - A ]$" , Proc. 22nd IEEE Conf. Decision Control (1982) pp. 1282–1288 |
[a11] | F.L. Lewis, "Further remarks on the Cayley–Hamilton theorem and Leverrier's method for the matrix pencil $[ s E - A ]$" IEEE Trans. Automat. Control , 31 (1986) pp. 869–870 |
[a12] | B.G. Mertzios, M.A. Christodoulou, "On the generalized Cayley–Hamilton theorem" IEEE Trans. Automat. Control , 31 (1986) pp. 156–157 |
[a13] | N.M. Smart, S. Barnett, "The algebra of matrices in $n$-dimensional systems" Math. Control Inform. , 6 (1989) pp. 121–133 |
[a14] | N.J. Theodorou, "M-dimensional Cayley–Hamilton theorem" IEEE Trans. Automat. Control , AC–34 : 5 (1989) pp. 563–565 |
[a15] | J. Victoria, "A block-Cayley–Hamilton theorem" Bull. Math. Soc. Sci. Math. Roum. , 26 : 1 (1982) pp. 93–97 |