Matrix
A rectangular array

$$\begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{pmatrix} \tag{1}$$

consisting of $m$ rows and $n$ columns, the entries $a_{ij}$ of which belong to some set $K$. (1) is also called an $(m \times n)$-dimensional matrix over $K$, or a matrix of dimensions $m \times n$ over $K$. Let $M_{m,n}(K)$ denote the set of all $(m \times n)$-dimensional matrices over $K$. If $m = n$, then (1) is called a square matrix of order $n$. The set of all square matrices of order $n$ over $K$ is denoted by $M_n(K)$.
Alternative notations for matrices are:

$$\|a_{ij}\|, \qquad (a_{ij}), \qquad [a_{ij}].$$
In the most important cases the role of $K$ is played by the field of real numbers, the field of complex numbers, an arbitrary field, a ring of polynomials, the ring of integers, a ring of functions, or an arbitrary associative ring. The operations of addition and multiplication defined on $K$ are carried over naturally to matrices over $K$, and in this way one is led to the matrix calculus — the subject matter of the theory of matrices.
The notion of a matrix first arose in the middle of the 19th century in the investigations of W. Hamilton and A. Cayley. Fundamental results in the theory of matrices are due to K. Weierstrass, C. Jordan and G. Frobenius. I.A. Lappo-Danilevskii developed the theory of analytic functions of several matrix variables and applied it to the study of systems of linear differential equations.
Operations with matrices.
Let $K$ be an associative ring and let $A = \|a_{ij}\|,\ B = \|b_{ij}\| \in M_{m,n}(K)$. Then the sum of the matrices $A$ and $B$ is, by definition,

$$A + B = \|a_{ij} + b_{ij}\|.$$

Clearly, $A + B \in M_{m,n}(K)$, and addition of matrices is associative and commutative. The null matrix in $M_{m,n}(K)$ is the matrix $0$, all entries of which are zero. For every $A \in M_{m,n}(K)$,

$$A + 0 = 0 + A = A.$$

Let $A = \|a_{ij}\| \in M_{m,n}(K)$ and $B = \|b_{jk}\| \in M_{n,s}(K)$. The product of the two matrices $A$ and $B$ is defined by the rule

$$AB = \|c_{ik}\| \in M_{m,s}(K),$$

where

$$c_{ik} = \sum_{j=1}^{n} a_{ij} b_{jk}, \qquad i = 1, \dots, m, \quad k = 1, \dots, s.$$

The product of two elements of $M_n(K)$ is always defined and belongs to $M_n(K)$. Multiplication of matrices is associative: If $A \in M_{m,n}(K)$, $B \in M_{n,s}(K)$ and $C \in M_{s,t}(K)$, then

$$(AB)C = A(BC),$$

and $ABC \in M_{m,t}(K)$. The distributivity rule also holds: For $A, B \in M_{m,n}(K)$ and $C, D \in M_{n,s}(K)$,

$$(A + B)C = AC + BC, \qquad A(C + D) = AC + AD. \tag{2}$$

In particular, (2) holds also for $A, B, C, D \in M_n(K)$. Consequently, $M_n(K)$ is an associative ring. If $K$ is a ring with an identity, then the matrix

$$E_n = \begin{pmatrix} 1 & & 0 \\ & \ddots & \\ 0 & & 1 \end{pmatrix}$$

is the identity of the ring $M_n(K)$:

$$A E_n = E_n A = A$$

for all $A \in M_n(K)$. Multiplication of matrices is not commutative: If $n > 1$, then for every associative ring $K$ with an identity there are matrices $A, B \in M_n(K)$ such that $AB \neq BA$.
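To make the multiplication rule concrete, here is a minimal Python sketch (numpy; the helper name `product` and the sample matrices are our own illustrative choices): it computes $c_{ik} = \sum_j a_{ij} b_{jk}$ directly and exhibits a pair of $2 \times 2$ matrices with $AB \neq BA$.

```python
import numpy as np

def product(A, B):
    """Matrix product by the rule c_ik = sum_j a_ij * b_jk."""
    m, n = A.shape
    n2, s = B.shape
    assert n == n2, "inner dimensions must agree"
    C = np.zeros((m, s))
    for i in range(m):
        for k in range(s):
            C[i, k] = sum(A[i, j] * B[j, k] for j in range(n))
    return C

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

print(product(A, B))   # [[1, 0], [0, 0]]
print(product(B, A))   # [[0, 0], [0, 1]]  -- so AB != BA
assert np.allclose(product(A, B), A @ B)   # agrees with numpy's built-in product
```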
Let $A = \|a_{ij}\| \in M_{m,n}(K)$, $\alpha \in K$; the product of the matrix $A$ by the element (number, scalar) $\alpha$ is, by definition, the matrix $\alpha A = \|\alpha a_{ij}\|$. Then

$$(\alpha + \beta)A = \alpha A + \beta A, \qquad \alpha(A + B) = \alpha A + \alpha B.$$

Let $K$ be a ring with an identity. The matrix $E_{ij}$ is defined as the element of $M_{m,n}(K)$ the only non-zero entry of which is the entry $(i, j)$, which equals 1, $1 \le i \le m$, $1 \le j \le n$. For every $A = \|a_{ij}\| \in M_{m,n}(K)$,

$$A = \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij} E_{ij}.$$

If $K$ is a field, then $M_{m,n}(K)$ is an $mn$-dimensional vector space over $K$, and the matrices $E_{ij}$ form a basis in this space.
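For example, in $M_{2,2}(K)$ the decomposition above reads

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} = a\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + b\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + c\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} + d\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = a E_{11} + b E_{12} + c E_{21} + d E_{22}.$$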
Block matrices.
Let $m = m_1 + \dots + m_p$, $n = n_1 + \dots + n_q$, where $m_\alpha$ and $n_\beta$ are positive integers. Then a matrix $A \in M_{m,n}(K)$ can be written in the form

$$A = \begin{pmatrix} A_{11} & \dots & A_{1q} \\ \vdots & & \vdots \\ A_{p1} & \dots & A_{pq} \end{pmatrix}, \tag{3}$$

where $A_{\alpha\beta} \in M_{m_\alpha, n_\beta}(K)$, $\alpha = 1, \dots, p$, $\beta = 1, \dots, q$. The matrix (3) is called a block matrix. If $s = s_1 + \dots + s_r$, $B \in M_{n,s}(K)$, $B_{\beta\gamma} \in M_{n_\beta, s_\gamma}(K)$, and $B$ is written in the form

$$B = \begin{pmatrix} B_{11} & \dots & B_{1r} \\ \vdots & & \vdots \\ B_{q1} & \dots & B_{qr} \end{pmatrix},$$

then

$$AB = \|C_{\alpha\gamma}\|, \qquad C_{\alpha\gamma} = \sum_{\beta=1}^{q} A_{\alpha\beta} B_{\beta\gamma}.$$
For example, if $n = n_1 + n_2$, then $A \in M_{m,n}(K)$ may be regarded as $(A_1 \ \ A_2)$, where $A_\beta \in M_{m,n_\beta}(K)$, $\beta = 1, 2$.
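A minimal sketch of the block multiplication rule in Python (numpy; the partition sizes and loop layout are our illustrative choices): the blockwise product $C_{\alpha\gamma} = \sum_\beta A_{\alpha\beta} B_{\beta\gamma}$ agrees with the ordinary product.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(5, 6)).astype(float)
B = rng.integers(-3, 4, size=(6, 4)).astype(float)

# Partitions m = 2 + 3, n = 2 + 4, s = 1 + 3 (arbitrary choices).
rows, mids, cols = [0, 2, 5], [0, 2, 6], [0, 1, 4]

C = np.zeros((5, 4))
for a in range(2):            # block row index alpha
    for g in range(2):        # block column index gamma
        for b in range(2):    # summation index beta
            C[rows[a]:rows[a+1], cols[g]:cols[g+1]] += (
                A[rows[a]:rows[a+1], mids[b]:mids[b+1]]
                @ B[mids[b]:mids[b+1], cols[g]:cols[g+1]]
            )

assert np.allclose(C, A @ B)  # blockwise product equals the ordinary product
```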
The matrix of the form

$$\begin{pmatrix} A_1 & & 0 \\ & \ddots & \\ 0 & & A_p \end{pmatrix},$$

where $A_\alpha \in M_{n_\alpha}(K)$, $\alpha = 1, \dots, p$, and $0$ is the null matrix, is denoted by $\operatorname{diag}(A_1, \dots, A_p)$ and is called block diagonal. The following holds:

$$\operatorname{diag}(A_1, \dots, A_p) + \operatorname{diag}(B_1, \dots, B_p) = \operatorname{diag}(A_1 + B_1, \dots, A_p + B_p),$$

$$\operatorname{diag}(A_1, \dots, A_p) \cdot \operatorname{diag}(B_1, \dots, B_p) = \operatorname{diag}(A_1 B_1, \dots, A_p B_p),$$

provided that the orders of $A_\alpha$ and $B_\alpha$ coincide for $\alpha = 1, \dots, p$.
Square matrices over a field.
Let $K$ be a field, let $A \in M_n(K)$ and let $\det A$ be the determinant of the matrix $A$. $A$ is said to be non-degenerate (or non-singular) if $\det A \neq 0$. A matrix $A^{-1} \in M_n(K)$ is called the inverse of $A$ if $A A^{-1} = A^{-1} A = E_n$. The invertibility of $A$ in $M_n(K)$ is equivalent to its non-degeneracy, and

$$A^{-1} = \left\| \frac{A_{ji}}{\det A} \right\|,$$

where $A_{ji}$ is the cofactor of the entry $a_{ji}$, $i, j = 1, \dots, n$. For $A, B \in M_n(K)$,

$$\det(AB) = \det A \cdot \det B.$$
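A sketch of the cofactor formula for the inverse in Python (sympy, so the arithmetic is exact; the helper name `inverse_via_cofactors` is ours):

```python
import sympy as sp

def inverse_via_cofactors(A):
    """Inverse by the formula A^{-1} = || A_ji / det A ||."""
    n = A.rows
    d = A.det()
    assert d != 0, "matrix is degenerate"
    # cofactor(i, j) is the cofactor of entry a_ij; note the transposition.
    return sp.Matrix(n, n, lambda i, j: A.cofactor(j, i) / d)

A = sp.Matrix([[2, 1], [5, 3]])
B = sp.Matrix([[1, 4], [2, 9]])

assert inverse_via_cofactors(A) == A.inv()
assert (A * B).det() == A.det() * B.det()   # det(AB) = det A * det B
```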
The set of all invertible elements of $M_n(K)$ is a group under multiplication, called the general linear group and denoted by $\operatorname{GL}(n, K)$. The powers of a matrix $A$ are defined as follows:

$$A^0 = E_n, \qquad A^k = A^{k-1} A \quad \text{for } k \ge 1,$$

and if $A$ is invertible, then $A^{-k} = (A^{-1})^k$. For the polynomial

$$f(x) = a_0 + a_1 x + \dots + a_m x^m \in K[x],$$

the matrix polynomial

$$f(A) = a_0 E_n + a_1 A + \dots + a_m A^m$$

is defined.
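Evaluating a matrix polynomial is a straightforward loop; a sketch in Python (numpy, using Horner's scheme; the helper name `matrix_poly` and the sample data are ours):

```python
import numpy as np

def matrix_poly(coeffs, A):
    """f(A) = a_0*E + a_1*A + ... + a_m*A^m for coeffs = [a_0, ..., a_m],
    evaluated by Horner's scheme."""
    n = A.shape[0]
    result = np.zeros((n, n))
    for a in reversed(coeffs):
        result = result @ A + a * np.eye(n)
    return result

A = np.array([[1., 2.], [0., 3.]])
f = [2., -1., 1.]   # f(x) = 2 - x + x^2
assert np.allclose(matrix_poly(f, A), 2*np.eye(2) - A + A @ A)
```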
Every matrix from $M_n(K)$ gives rise to a linear transformation of the $n$-dimensional vector space $V$ over $K$. Let $v_1, \dots, v_n$ be a basis in $V$ and let $\sigma$ be a linear transformation of $V$. Then $\sigma$ is uniquely determined by the set of vectors

$$\sigma(v_1), \dots, \sigma(v_n).$$

Moreover,

$$\sigma(v_j) = \sum_{i=1}^{n} a_{ij} v_i, \qquad j = 1, \dots, n, \tag{4}$$

where $a_{ij} \in K$. The matrix $A = \|a_{ij}\|$ is called the matrix of the transformation $\sigma$ in the basis $v_1, \dots, v_n$. For a fixed basis, the matrix $A + B$ is the matrix of the linear transformation $\sigma + \tau$, while $AB$ is the matrix of $\sigma\tau$ if $B$ is the matrix of the linear transformation $\tau$. Equality (4) may be written in the form

$$(\sigma(v_1), \dots, \sigma(v_n)) = (v_1, \dots, v_n) A.$$
Suppose that $w_1, \dots, w_n$ is a second basis in $V$. Then $w_j = \sum_{i=1}^{n} c_{ij} v_i$, $C = \|c_{ij}\| \in \operatorname{GL}(n, K)$, and $C^{-1} A C$ is the matrix of the transformation $\sigma$ in the basis $w_1, \dots, w_n$. Two matrices $A, B \in M_n(K)$ are similar if there is a matrix $C \in \operatorname{GL}(n, K)$ such that $B = C^{-1} A C$. Here, also, $\det A = \det B$, and the ranks of the matrices $A$ and $B$ coincide. The linear transformation $\sigma$ is called non-degenerate, or non-singular, if $\sigma(V) = V$; $\sigma$ is non-degenerate if and only if its matrix is non-degenerate. If $V$ is regarded as the space of columns $x = (x_1, \dots, x_n)^T$, $x_i \in K$, then every linear transformation in $V$ is given by left multiplication of the columns $x$ by some $A \in M_n(K)$: $\sigma(x) = Ax$, and the matrix of $\sigma$ in the basis

$$e_1 = (1, 0, \dots, 0)^T, \quad \dots, \quad e_n = (0, \dots, 0, 1)^T$$

coincides with $A$. A matrix $A \in M_n(K)$ is singular (or degenerate) if and only if there is a column $x$, $x \neq 0$, such that $Ax = 0$.
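A sketch of the change-of-basis rule $C^{-1} A C$ in Python (numpy; the specific matrices are arbitrary illustrative choices), confirming that similar matrices share determinant and rank:

```python
import numpy as np

A = np.array([[2., 1.], [0., 3.]])   # matrix of sigma in the basis v_1, v_2
C = np.array([[1., 1.], [1., 2.]])   # change of basis; det C = 1 != 0

B = np.linalg.inv(C) @ A @ C         # matrix of sigma in the new basis

assert np.isclose(np.linalg.det(A), np.linalg.det(B))        # det preserved
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B)  # rank preserved
```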
Transposition and matrices of special form.
Let $A = \|a_{ij}\| \in M_{m,n}(K)$. Then the matrix $A^T = \|b_{ij}\| \in M_{n,m}(K)$, where $b_{ij} = a_{ji}$, is called the transpose of $A$. Alternative notations are $A'$ and ${}^t A$. Let $A = \|a_{ij}\| \in M_{m,n}(\mathbf{C})$. Then the matrix $\bar{A} = \|\bar{a}_{ij}\|$, where $\bar{a}_{ij}$ is the complex conjugate of the number $a_{ij}$, is called the complex conjugate of $A$. The matrix $A^* = \bar{A}^T = \|a^*_{ij}\|$, where $a^*_{ij} = \bar{a}_{ji}$, is called the Hermitian conjugate of $A$. Many matrices used in applications are given special names:
| Name of the matrix | Defining condition |
|---|---|
| Symmetric | $A^T = A$ |
| Skew-symmetric | $A^T = -A$ |
| Orthogonal | $A^T = A^{-1}$ |
| Hermitian | $A^* = A$ |
| Unitary | $A^* = A^{-1}$ |
| Normal | $A A^* = A^* A$ |
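A short numerical illustration of these defining conditions (Python with numpy; the sample matrices are ours):

```python
import numpy as np

H = np.array([[2., 1 - 1j], [1 + 1j, 3.]])
assert np.allclose(H, H.conj().T)           # Hermitian: A* = A

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(Q.T @ Q, np.eye(2))      # orthogonal: A^T = A^{-1}

assert np.allclose(H @ H.conj().T, H.conj().T @ H)   # Hermitian matrices are normal
```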
Polynomial matrices.
Let $K$ be a field and let $K[\lambda]$ be the ring of all polynomials in the variable $\lambda$ with coefficients from $K$. A matrix over $K[\lambda]$ is called a polynomial matrix. For the elements of the ring $M_n(K[\lambda])$ one introduces the following elementary operations: 1) multiplication of a row or column of a matrix by a non-zero element of the field $K$; and 2) addition to a row (column) of another row (respectively, column) of the given matrix, multiplied by a polynomial from $K[\lambda]$. Two matrices $A(\lambda), B(\lambda) \in M_n(K[\lambda])$ are called equivalent ($A(\lambda) \sim B(\lambda)$) if $B(\lambda)$ can be obtained from $A(\lambda)$ through a finite number of elementary operations.
Let

$$A(\lambda) = \operatorname{diag}(f_1(\lambda), \dots, f_r(\lambda), 0, \dots, 0) \in M_n(K[\lambda]),$$

where a) $f_i(\lambda) \neq 0$, $i = 1, \dots, r$; b) $f_{i+1}(\lambda)$ is divisible by $f_i(\lambda)$ for $i = 1, \dots, r-1$; and c) the coefficient of the leading term in each $f_i(\lambda)$ is equal to 1. Then $A(\lambda)$ is called a canonical polynomial matrix. Every equivalence class of elements of the ring $M_n(K[\lambda])$ contains a unique canonical matrix. If $A(\lambda) \sim B(\lambda)$, where

$$B(\lambda) = \operatorname{diag}(f_1(\lambda), \dots, f_r(\lambda), 0, \dots, 0)$$

is a canonical matrix, then the polynomials

$$f_1(\lambda), \dots, f_r(\lambda)$$

are called the invariant factors of $A(\lambda)$; the number $r$ is identical with the rank of $A(\lambda)$. A matrix $A(\lambda)$ has an inverse in $M_n(K[\lambda])$ if and only if $A(\lambda) \sim E_n$. The last condition is in turn equivalent to $\det A(\lambda) \in K \setminus \{0\}$. Two matrices $A(\lambda), B(\lambda) \in M_n(K[\lambda])$ are equivalent if and only if

$$B(\lambda) = P(\lambda) A(\lambda) Q(\lambda),$$

where $P(\lambda)$ and $Q(\lambda)$ are matrices invertible in $M_n(K[\lambda])$.
Let $A \in M_n(K)$. The matrix

$$\lambda E_n - A$$

is called the characteristic matrix of $A$, and $\det(\lambda E_n - A)$ is called the characteristic polynomial of $A$. For every polynomial of the form

$$f(\lambda) = \lambda^n + c_1 \lambda^{n-1} + \dots + c_n \in K[\lambda]$$

there is an $A \in M_n(K)$ such that

$$\det(\lambda E_n - A) = f(\lambda).$$

Such is, for example, the matrix

$$A = \begin{pmatrix} 0 & 0 & \dots & 0 & -c_n \\ 1 & 0 & \dots & 0 & -c_{n-1} \\ 0 & 1 & \dots & 0 & -c_{n-2} \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & \dots & 1 & -c_1 \end{pmatrix} \tag{*}$$
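A quick check of this construction in Python (sympy; the coefficients are an arbitrary example):

```python
import sympy as sp

lam = sp.symbols("lambda")
c1, c2, c3 = 2, -5, 3   # f(lambda) = lambda^3 + 2*lambda^2 - 5*lambda + 3

A = sp.Matrix([[0, 0, -c3],
               [1, 0, -c2],
               [0, 1, -c1]])

charpoly = (lam * sp.eye(3) - A).det()
assert sp.expand(charpoly) == sp.expand(lam**3 + c1*lam**2 + c2*lam + c3)
```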
The characteristic polynomials of two similar matrices coincide. However, the fact that two matrices have identical characteristic polynomials does not necessarily imply that the matrices are similar. A similarity criterion is: Two matrices $A, B \in M_n(K)$ are similar if and only if the polynomial matrices $\lambda E_n - A$ and $\lambda E_n - B$ are equivalent. The set of all matrices from $M_n(K)$ having a given characteristic polynomial $f(\lambda)$ is partitioned into a finite number of classes of similar matrices; this set reduces to a single class if and only if $f(\lambda)$ does not have multiple factors in $K[\lambda]$.
Let $A \in M_n(K)$, $x \in K^n$, $x \neq 0$, and suppose that $Ax = \lambda_0 x$, where $\lambda_0 \in K$. Then $x$ is called an eigen vector of $A$ and $\lambda_0$ is called an eigen value of $A$. An element $\lambda_0 \in K$ is an eigen value of a matrix $A$ if and only if it is a root of the characteristic polynomial of $A$. The set of all columns $x$ such that $Ax = \lambda_0 x$ for a fixed eigen value $\lambda_0$ of $A$ is a subspace of $K^n$. The dimension of this subspace equals the defect (or deficiency) $d$ of the matrix $\lambda_0 E_n - A$ ($d = n - r$, where $r$ is the rank of $\lambda_0 E_n - A$). The number $d$ does not exceed the multiplicity of the root $\lambda_0$, but need not coincide with it. A matrix $A \in M_n(K)$ is similar to a diagonal matrix if and only if it has $n$ linearly independent eigen vectors. If for an $A \in M_n(K)$,

$$\det(\lambda E_n - A) = (\lambda - \lambda_1)^{n_1} \cdots (\lambda - \lambda_k)^{n_k},$$

and the roots $\lambda_1, \dots, \lambda_k$ are distinct, then the following holds: $A$ is similar to a diagonal matrix if and only if for each $i$, $1 \le i \le k$, the defect of $\lambda_i E_n - A$ coincides with $n_i$. In particular, every matrix with $n$ distinct eigen values is similar to a diagonal matrix. Over an algebraically closed field every matrix from $M_n(K)$ is similar to some triangular matrix from $M_n(K)$. The Hamilton–Cayley theorem: If $f(\lambda)$ is the characteristic polynomial of a matrix $A \in M_n(K)$, then $f(A)$ is the null matrix.
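A numerical illustration of the Hamilton–Cayley theorem (sympy; the matrix is an arbitrary example):

```python
import sympy as sp

A = sp.Matrix([[1, 2], [3, 4]])
lam = sp.symbols("lambda")

f = sp.Poly((lam * sp.eye(2) - A).det(), lam)   # f(lambda) = lambda^2 - 5*lambda - 2

# Evaluate f at the matrix A by Horner's scheme.
f_of_A = sp.zeros(2, 2)
for c in f.all_coeffs():
    f_of_A = f_of_A * A + c * sp.eye(2)

assert f_of_A == sp.zeros(2, 2)   # Hamilton-Cayley: f(A) = 0
```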
By definition, the minimum polynomial of a matrix $A \in M_n(K)$ is the polynomial $m(\lambda) \in K[\lambda]$ with the properties: $\alpha$) $m(A) = 0$; $\beta$) the coefficient of the leading term equals 1; and $\gamma$) if $h(\lambda) \neq 0$ and the degree of $h(\lambda)$ is smaller than the degree of $m(\lambda)$, then $h(A) \neq 0$. Every matrix has a unique minimum polynomial. If $g(\lambda) \in K[\lambda]$ and $g(A) = 0$, then the minimum polynomial $m(\lambda)$ of $A$ divides $g(\lambda)$. The minimum polynomial and the characteristic polynomial of $A$ coincide with the last invariant factor, and, respectively, the product of all invariant factors, of the matrix $\lambda E_n - A$. The minimum polynomial of $A$ equals

$$m(\lambda) = \frac{\det(\lambda E_n - A)}{d_{n-1}(\lambda)},$$

where $d_{n-1}(\lambda)$ is the greatest common divisor of the minors (cf. Minor) of order $n-1$ of the matrix $\lambda E_n - A$. A matrix $A$ is similar to a diagonal matrix over the field $K$ if and only if its minimum polynomial is a product of distinct linear factors in the ring $K[\lambda]$.
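The quotient formula above can be carried out directly in Python (sympy; the helper name `min_poly_via_minors` is ours). In the example below, $A = \operatorname{diag}(2, 2)$ has characteristic polynomial $(\lambda - 2)^2$ but minimum polynomial $\lambda - 2$:

```python
import sympy as sp
from functools import reduce

lam = sp.symbols("lambda")

def min_poly_via_minors(A):
    """m(lambda) = det(lambda*E - A) / d_{n-1}(lambda), where d_{n-1} is the
    gcd of all minors of order n-1 of the characteristic matrix."""
    n = A.rows
    M = lam * sp.eye(n) - A
    minors = [M.minor(i, j) for i in range(n) for j in range(n)]
    d = reduce(sp.gcd, minors)
    return sp.cancel(M.det() / d)

A = sp.Matrix([[2, 0], [0, 2]])
assert sp.expand(min_poly_via_minors(A)) == lam - 2
```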
A matrix $A$ is called nilpotent if $A^k = 0$ for some integer $k \ge 1$. A matrix $A \in M_n(K)$ is nilpotent if and only if $\det(\lambda E_n - A) = \lambda^n$. Every nilpotent matrix from $M_n(K)$ is similar to some triangular matrix with zeros on the diagonal.
Comments
The result on canonical polynomial matrices quoted above has a natural generalization to matrices over principal ideal domains. An $(m \times n)$-matrix $B$ over a principal ideal domain $R$ of the form

$$B = \operatorname{diag}(b_1, \dots, b_r, 0, \dots, 0) \tag{a1}$$

with $b_{i+1}$ divisible by $b_i$, $i = 1, \dots, r-1$, is said to be in Smith canonical form. Every matrix $A$ over a principal ideal domain $R$ is equivalent to one in Smith canonical form in the sense that there are an $(m \times m)$-matrix $P$ and an $(n \times n)$-matrix $Q$ such that $P$ and $Q$ are invertible in $M_m(R)$ and $M_n(R)$, respectively, and such that $P A Q$ is in Smith canonical form.
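For matrices over the integers, a principal ideal domain, the Smith canonical form can be computed with sympy's `smith_normal_form` (available in recent sympy versions; the sample matrix is a standard worked example whose invariant factors are 2, 6, 12):

```python
import sympy as sp
from sympy.matrices.normalforms import smith_normal_form

A = sp.Matrix([[ 2,  4,   4],
               [-6,  6,  12],
               [10, -4, -16]])

S = smith_normal_form(A)
print(S)   # Matrix([[2, 0, 0], [0, 6, 0], [0, 0, 12]]), with 2 | 6 | 12
```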
A matrix of the form (*) is said to be in companion form, especially in linear systems and control theory, where the theory of (polynomial) matrices finds many applications.