Linear transformation
A mapping $A$ of a vector space into itself under which the image of the sum of two vectors is the sum of their images and the image of the product of a vector by a number is the product of the image of the vector by this number. If $L$ is a vector space, $A$ is a linear transformation defined in it, $x, y$ are any vectors of the space, and $\lambda$ is any number (an element of a field), then

$$A(x + y) = Ax + Ay, \qquad A(\lambda x) = \lambda \, Ax.$$
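For a concrete check of the two defining identities, here is a minimal sketch in Python (assuming NumPy and an arbitrarily chosen matrix, neither of which is part of the article):

```python
import numpy as np

# A concrete linear transformation of R^2, given for illustration by a fixed
# 2x2 matrix acting on column vectors (an assumption, not from the article).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

x = np.array([1.0, -2.0])
y = np.array([4.0, 0.5])
lam = 3.5

# The two defining properties: additivity and homogeneity.
assert np.allclose(A @ (x + y), A @ x + A @ y)
assert np.allclose(A @ (lam * x), lam * (A @ x))
```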
If a vector space $L$ has finite dimension $n$, $e_1, \dots, e_n$ is a basis of it, $x_1, \dots, x_n$ are the coordinates of an arbitrary vector $x$ in this basis, and $x_1', \dots, x_n'$ are the coordinates of its image $x' = Ax$, then the coordinates of the vector $x'$ are expressed in terms of the coordinates of the vector $x$ by linear homogeneous functions:

$$
\begin{aligned}
x_1' &= a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n,\\
&\cdots\\
x_n' &= a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n.
\end{aligned}
$$
The matrix

$$A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\cdots & \cdots & \cdots & \cdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}$$

is called the matrix of the linear transformation in the basis $e_1, \dots, e_n$. Its columns consist of the coordinates of the images of the basis vectors.
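The statement about the columns can be illustrated as follows (a sketch assuming NumPy; the matrix is an arbitrary illustrative choice):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])

n = A.shape[0]
for j in range(n):
    e_j = np.zeros(n)
    e_j[j] = 1.0
    # The image A e_j coincides with the j-th column of the matrix.
    assert np.allclose(A @ e_j, A[:, j])
```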
If

$$T = \begin{pmatrix}
t_{11} & t_{12} & \cdots & t_{1n} \\
t_{21} & t_{22} & \cdots & t_{2n} \\
\cdots & \cdots & \cdots & \cdots \\
t_{n1} & t_{n2} & \cdots & t_{nn}
\end{pmatrix}$$

is the transition matrix from the basis $e_1, \dots, e_n$ to a basis $f_1, \dots, f_n$:

$$
\begin{aligned}
f_1 &= t_{11} e_1 + t_{21} e_2 + \cdots + t_{n1} e_n,\\
&\cdots\\
f_n &= t_{1n} e_1 + t_{2n} e_2 + \cdots + t_{nn} e_n,
\end{aligned}
$$

then in the basis $f_1, \dots, f_n$ the matrix $B$ of the linear transformation $A$ is $B = T^{-1} A T$.
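A small numerical sketch of the change-of-basis formula $B = T^{-1} A T$ (NumPy assumed; the matrices $A$ and $T$ are arbitrary illustrative choices):

```python
import numpy as np

# Matrix of the transformation in the original basis (arbitrary choice).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Transition matrix T: its columns are the coordinates of the new basis
# vectors f_1, f_2 in the old basis e_1, e_2 (any invertible matrix works).
T = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# Matrix of the same transformation in the new basis.
B = np.linalg.inv(T) @ A @ T

# Consistency check: a vector with coordinates c in the new basis has
# coordinates T @ c in the old one, and the two descriptions of its image agree.
c = np.array([0.5, -1.0])
assert np.allclose(T @ (B @ c), A @ (T @ c))
```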
The sum of two linear transformations $A$ and $B$ is the transformation $A + B$ such that for any vector $x$,

$$(A + B)x = Ax + Bx.$$
The product of a linear transformation $A$ by a number $\lambda$ is the transformation $\lambda A$ for which $(\lambda A)x = \lambda(Ax)$ for every vector $x$.
The product of a linear transformation $A$ by a linear transformation $B$ is the transformation $AB$ such that for every vector $x$,

$$(AB)x = A(Bx).$$
The sum of two linear transformations, the product of a linear transformation by a number, and the product of two linear transformations (in any order) are themselves linear transformations. The linear transformations form an algebra. In the case of a finite-dimensional space of dimension $n$, the algebra of its linear transformations is isomorphic to the algebra of square matrices of order $n$ with as entries the elements of the field over which the vector space is constructed.
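The correspondence with matrix operations can be seen concretely; in the following sketch (NumPy assumed, matrices chosen arbitrarily) sum, scalar multiple and composition of transformations go over into the matrix sum, scalar multiple and matrix product:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [-1.0, 2.0]])
lam = 2.5
x = np.array([1.0, -1.0])

# (A + B)x = Ax + Bx
assert np.allclose((A + B) @ x, A @ x + B @ x)
# (lam A)x = lam (Ax)
assert np.allclose((lam * A) @ x, lam * (A @ x))
# (AB)x = A(Bx): composition of transformations is the matrix product.
assert np.allclose((A @ B) @ x, A @ (B @ x))
```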
A linear transformation $A$ under which a vector space is mapped onto itself is said to be invertible if there is a transformation $A^{-1}$ such that

$$A A^{-1} = A^{-1} A = E,$$

where $E$ is the identity transformation. The transformation $A^{-1}$ is a linear transformation and is called the inverse transformation of $A$. A linear transformation defined on a finite-dimensional vector space is invertible if and only if the determinant of its matrix in some (and therefore in any) basis is non-zero. If $A$ is the matrix of an invertible linear transformation in some basis, then the matrix of the inverse transformation in that basis is $A^{-1}$. The invertible linear transformations form a group with respect to multiplication. In the case of a vector space of finite dimension $n$, this group is isomorphic to the group of non-singular square matrices of order $n$.
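A brief sketch of the invertibility criterion (NumPy assumed; the matrix is an arbitrary illustrative choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])

# Invertible if and only if the determinant is non-zero.
assert not np.isclose(np.linalg.det(A), 0.0)

A_inv = np.linalg.inv(A)

# A A^{-1} = A^{-1} A = E (the identity transformation).
E = np.eye(2)
assert np.allclose(A @ A_inv, E)
assert np.allclose(A_inv @ A, E)
```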
A subspace $L_1$ of a vector space $L$ is called an invariant subspace with respect to a linear transformation $A$ if $Ax \in L_1$ for every vector $x \in L_1$. A non-zero vector $x$ is called an eigen vector of a linear transformation $A$, corresponding to the eigen value $\lambda$, if $Ax = \lambda x$. In the case of a finite-dimensional space over the field of complex numbers (or, more generally, an algebraically closed field) every linear transformation has an eigen vector (a one-dimensional invariant subspace). In the case of a finite-dimensional space over the field of real numbers every linear transformation has a one-dimensional or two-dimensional invariant subspace.
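A short sketch computing eigen values and eigen vectors numerically (NumPy assumed; the matrix is an arbitrary illustrative choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigen values and eigen vectors (the eigen vectors are the columns of V).
eigvals, V = np.linalg.eig(A)

for lam, v in zip(eigvals, V.T):
    # Each eigen vector spans a one-dimensional invariant subspace: A v = lam v.
    assert np.allclose(A @ v, lam * v)
```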
A linear transformation $A$, defined on a finite-dimensional vector space $L$, is called a diagonalizable linear transformation if there is a basis in $L$ in which the matrix of this transformation has diagonal form (cf. Diagonal matrix). In other words, a linear transformation is diagonalizable if the space has a basis consisting of eigen vectors of this linear transformation. However, not every linear transformation has a basis of eigen vectors, even in a space over the field of complex numbers. E.g. the linear transformation of a two-dimensional space given by the matrix

$$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$

has a unique one-dimensional invariant subspace, with basis $e_1$.
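The failure of diagonalizability for this matrix can be checked exactly (a sketch assuming SymPy, which is not part of the article):

```python
import sympy as sp

# The matrix from the example above.
J = sp.Matrix([[1, 1],
               [0, 1]])

# The eigen value 1 has algebraic multiplicity 2 but only one independent
# eigen vector, e_1 = (1, 0): there is no basis of eigen vectors.
print(J.eigenvects())          # [(1, 2, [Matrix([[1], [0]])])]
print(J.is_diagonalizable())   # False
```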
In a finite-dimensional vector space over the field of complex numbers (or any algebraically closed field) there is for every linear transformation a basis in which the matrix of this transformation has block form (cf. Block-diagonal operator), with Jordan blocks on the main diagonal and zeros elsewhere. A Jordan block of the first order consists of one number $\lambda$; a Jordan block of order $m$ is a square matrix of order $m$ of the form

$$\begin{pmatrix}
\lambda & 1 & 0 & \cdots & 0 & 0 \\
0 & \lambda & 1 & \cdots & 0 & 0 \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\
0 & 0 & 0 & \cdots & \lambda & 1 \\
0 & 0 & 0 & \cdots & 0 & \lambda
\end{pmatrix}.$$

The numbers $\lambda$ are the eigen values of the matrix of the linear transformation. To one and the same $\lambda$ several blocks of the same order may correspond, as well as blocks of different orders. The matrix consisting of Jordan blocks is called the Jordan normal form (or Jordan canonical form) of the matrix.
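A sketch of the reduction to Jordan normal form using exact arithmetic (SymPy assumed; the matrix below is an illustrative choice, not from the article, whose Jordan form contains a block of order 2):

```python
import sympy as sp

A = sp.Matrix([[5, 4, 2, 1],
               [0, 1, -1, -1],
               [-1, -1, 3, 0],
               [1, 1, -1, 2]])

# jordan_form returns P and J with A = P J P^{-1};
# J has Jordan blocks on the main diagonal and zeros elsewhere.
P, J = A.jordan_form()
sp.pprint(J)
assert sp.simplify(P * J * P.inv() - A) == sp.zeros(4, 4)
```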
A linear transformation $A$, defined on a Euclidean (unitary) space (cf. Unitary space), is said to be self-adjoint (respectively, Hermitian) if for any two vectors $x$ and $y$ one has

$$(Ax, y) = (x, Ay).$$
A linear transformation, defined on a finite-dimensional Euclidean (unitary) space, is self-adjoint (Hermitian) if and only if its matrix in some (and therefore any) orthonormal basis is symmetric (respectively, Hermitian, cf. Hermitian matrix; Symmetric matrix). A self-adjoint (Hermitian) linear transformation, defined on a finite-dimensional Euclidean (respectively, unitary) space, has an orthonormal basis in which its matrix has diagonal form. The main diagonal consists of the (always real) eigen values of the matrix.
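A numerical sketch of the last statements (NumPy assumed; the symmetric matrix is an arbitrary illustrative choice):

```python
import numpy as np

# A symmetric (self-adjoint) matrix in an orthonormal basis.
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is intended for symmetric/Hermitian matrices: the eigen values are real
# and the eigen vectors (columns of Q) form an orthonormal basis.
eigvals, Q = np.linalg.eigh(S)

assert np.allclose(Q.T @ Q, np.eye(3))              # orthonormal basis
assert np.allclose(Q.T @ S @ Q, np.diag(eigvals))   # diagonal form, real diagonal
```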
A linear transformation $A$, defined on a Euclidean (unitary) space $L$, is said to be isometric or orthogonal (respectively, unitary) if for every vector $x$,

$$(Ax, Ax) = (x, x).$$
A linear transformation, defined on a finite-dimensional Euclidean (unitary) space, is isometric (respectively, unitary) if and only if its matrix in some (and then in any) orthonormal basis is orthogonal (respectively, unitary, cf. Orthogonal matrix; Unitary matrix). For every isometric linear transformation, defined on a finite-dimensional Euclidean space, there is an orthonormal basis in which the matrix $A$ of the transformation consists of blocks of the first and second orders on its main diagonal. The blocks of the first order are the real eigen values of $A$, equal to $+1$ and $-1$, and the blocks of the second order have the form

$$\begin{pmatrix} \alpha & -\beta \\ \beta & \alpha \end{pmatrix},$$

where $\alpha$ and $\beta$ are the real and imaginary parts of the complex eigen value $\alpha + i\beta$ of $A$; the other entries of $A$ are zero. For every unitary transformation, defined on a unitary space, there is an orthonormal basis in which the matrix of this transformation is diagonal and on the main diagonal there are numbers of absolute value 1.
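A sketch illustrating isometric transformations of a Euclidean space (NumPy assumed; the matrix below is an arbitrary illustrative choice and happens to be already in the block form just described):

```python
import numpy as np

theta = 0.7  # an arbitrary angle, for illustration only

# An orthogonal matrix: a rotation in a plane combined with a reflection,
# i.e. one block of the second order and one block of the first order (-1).
A = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,          -1.0]])

# Orthogonality of the matrix: A^T A = E.
assert np.allclose(A.T @ A, np.eye(3))

# Isometry: the scalar product of every vector with itself is preserved.
x = np.array([1.0, -2.0, 0.5])
assert np.allclose((A @ x) @ (A @ x), x @ x)

# All eigen values have absolute value 1.
assert np.allclose(np.abs(np.linalg.eigvals(A)), 1.0)
```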
Every linear transformation, defined on a finite-dimensional Euclidean (unitary) space, is the product of a self-adjoint (cf. Self-adjoint linear transformation) and an isometric linear transformation (respectively, of a Hermitian and a unitary linear transformation).
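A sketch of this factorization, often computed as a polar decomposition (SciPy's scipy.linalg.polar and the matrix below are assumptions for illustration, not part of the article):

```python
import numpy as np
from scipy.linalg import polar

# An arbitrary matrix of a linear transformation in an orthonormal basis.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

# Polar decomposition A = U P: U is orthogonal (isometric), P is symmetric
# (self-adjoint) and positive semi-definite.
U, P = polar(A)

assert np.allclose(U.T @ U, np.eye(2))   # U is orthogonal
assert np.allclose(P, P.T)               # P is symmetric
assert np.allclose(U @ P, A)             # A is their product
```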
References
[1] P.S. Aleksandrov, "Lectures on analytical geometry", Moscow (1968) (In Russian)
[2] I.M. Gel'fand, "Lectures on linear algebra", Moscow (1971) (In Russian)
[3] N.V. Efimov, E.R. Rozendorn, "Linear algebra and multi-dimensional geometry", Moscow (1970) (In Russian)
[4] P.R. Halmos, "Finite-dimensional vector spaces", Van Nostrand (1958)
Comments
References
[a1] N. Bourbaki, "Elements of mathematics", 2. Linear and multilinear algebra, Addison-Wesley (1973) pp. Chapt. 2 (Translated from French)
[a2] N. Jacobson, "Lectures in abstract algebra", 2. Linear algebra, Van Nostrand (1953)
Linear transformation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Linear_transformation&oldid=19621