
Difference between revisions of "Linear transformation"

From Encyclopedia of Mathematics
A mapping of a [[Vector space|vector space]] into itself under which the image of the sum of two vectors is the sum of their images and the image of the product of a vector by a number is the product of the image of the vector by this number. If $V$ is a vector space, $f$ is a linear transformation defined in it, $x,y$ are any vectors of the space, and $\lambda$ is any number (an element of a field), then
$$f(x+y)=f(x)+f(y),\quad f(\lambda x) = \lambda f(x).$$
If a vector space $V$ has finite dimension $n$, $e_1,\dots,e_n$ is a [[Basis|basis]] of it, $x_1,\dots,x_n$ are the coordinates of an arbitrary vector $x$ in this basis, and $y_1,\dots,y_n$ are the coordinates of its image $y=f(x)$, then the coordinates of the vector $y$ are expressed in terms of the coordinates of the vector $x$ by linear homogeneous functions:
$$y_1=a_{11}x_1+\dots+a_{1n}x_n,$$
$$\vdots$$
$$y_n=a_{n1}x_1+\dots+a_{nn}x_n.$$
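In coordinates, then, $f$ acts as multiplication of the coordinate column by the matrix $(a_{ij})$. A minimal NumPy sketch of the two defining properties (the matrix `A` and the vectors are arbitrary illustrative choices, not taken from the article):

```python
import numpy as np

# Hypothetical matrix of a linear transformation f in some fixed basis.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

def f(x):
    # y_i = a_{i1} x_1 + ... + a_{in} x_n  (the linear homogeneous functions)
    return A @ x

x = np.array([1.0, 2.0])
y = np.array([-3.0, 0.5])
lam = 4.0

# The two defining identities of a linear transformation:
additive = np.allclose(f(x + y), f(x) + f(y))
homogeneous = np.allclose(f(lam * x), lam * f(x))
```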
The matrix
$$A=\begin{pmatrix}a_{11}&\cdots&a_{1n}\\ \vdots&\ddots&\vdots \\ a_{n1}&\cdots&a_{nn}\end{pmatrix}$$
is called the matrix of the linear transformation $f$ in the basis $e_1,\dots,e_n$. Its columns consist of the coordinates of the images of the basis vectors. If
$$C=\begin{pmatrix}c_{11}&\cdots&c_{1n}\\ \vdots&\ddots&\vdots \\ c_{n1}&\cdots&c_{nn}\end{pmatrix}$$
is the transition matrix from the basis $e_1,\dots,e_n$ to a basis $e_1',\dots,e_n'$:
$$e_1'=c_{11}e_1+\dots+c_{n1}e_n,$$
$$\vdots$$
$$e_n'=c_{1n}e_1+\dots+c_{nn}e_n,$$
then in the basis $e_1',\dots,e_n'$ the matrix $B$ of the linear transformation $f$ is $B=C^{-1}AC$.
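The change-of-basis formula can be checked numerically. In this sketch (matrices are illustrative choices), if $x'$ are coordinates in the new basis, then $x=Cx'$ in the old one, and the two routes to the image must agree:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # matrix of f in the basis e_1, e_2 (illustrative)
C = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # transition matrix: columns are e_1', e_2' in the old basis

B = np.linalg.inv(C) @ A @ C   # matrix of f in the basis e_1', e_2'

# If x' are new-basis coordinates, the old coordinates are C x'; the image
# computed in the new basis (B x', then converted back) must equal A (C x').
x_new = np.array([1.0, 2.0])
consistent = np.allclose(C @ (B @ x_new), A @ (C @ x_new))
```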
  
The sum of two linear transformations $f$ and $g$ is the transformation $h$ such that for any vector $x\in V$,
$$h(x)=f(x)+g(x).$$
The product of a linear transformation $f$ by a number $\lambda$ is the transformation $k$ for which $k(x)=\lambda f(x)$ for every vector $x\in V$.
  
The product of a linear transformation $f$ by a linear transformation $g$ is the transformation
$$l(x) = g(f(x)).$$
The sum of two linear transformations, the product of a linear transformation by a number, and the product of two linear transformations (in any order) are themselves linear transformations. The linear transformations form an algebra. In the case of a finite-dimensional space of dimension $n$, the algebra of its linear transformations is isomorphic to the algebra of square matrices of order $n$ whose entries are elements of the field over which the vector space is constructed.
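Under this isomorphism the operations on transformations correspond to the matrix operations; note that with column vectors the composition $l(x)=g(f(x))$ corresponds to the product $GF$, in that order. A sketch with illustrative matrices:

```python
import numpy as np

F = np.array([[1.0, 2.0], [0.0, 1.0]])  # matrix of f (illustrative)
G = np.array([[0.0, 1.0], [1.0, 0.0]])  # matrix of g (illustrative)

def f(x): return F @ x
def g(x): return G @ x

x = np.array([3.0, -1.0])

# Sum, scalar multiple and composition of transformations correspond to
# matrix sum, scalar multiple and matrix product (g after f <-> G F).
sum_ok = np.allclose(f(x) + g(x), (F + G) @ x)
scalar_ok = np.allclose(2.5 * f(x), (2.5 * F) @ x)
product_ok = np.allclose(g(f(x)), (G @ F) @ x)
```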
  
A linear transformation $f$ under which a vector space is mapped onto itself is said to be invertible if there is a transformation $f^{-1}$ such that
$$ff^{-1} = f^{-1}f = E,$$
where $E$ is the identity transformation. The transformation $f^{-1}$ is a linear transformation and is called the inverse transformation of $f$. A linear transformation defined on a finite-dimensional vector space is invertible if and only if the [[Determinant|determinant]] of its matrix in some (and therefore in any) basis is non-zero. If $A$ is the matrix of an invertible linear transformation $f$, then the matrix of the inverse $f^{-1}$ is $A^{-1}$. The invertible linear transformations form a group with respect to multiplication. In the case of a vector space of finite dimension $n$, this group is isomorphic to the group of non-singular square matrices of order $n$.
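The determinant criterion and the identity $ff^{-1}=f^{-1}f=E$ can be verified directly; the matrix below is an arbitrary illustrative choice:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])            # illustrative matrix of f

# f is invertible iff det A != 0.
invertible = not np.isclose(np.linalg.det(A), 0.0)
A_inv = np.linalg.inv(A)              # matrix of the inverse transformation f^{-1}

E = np.eye(2)
# In matrix form, f f^{-1} = f^{-1} f = E:
both_identities = np.allclose(A @ A_inv, E) and np.allclose(A_inv @ A, E)
```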
  
A subspace $V'$ of a vector space $V$ is called an invariant subspace with respect to a linear transformation $f$ if $f(x)\in V'$ for every vector $x\in V'$. A non-zero vector $x\in V$ is called an eigen vector of a linear transformation $f$, corresponding to the eigen value $\lambda$, if $f(x)=\lambda x$. In the case of a finite-dimensional space over the field of complex numbers (or, more generally, an algebraically closed field) every linear transformation has an eigen vector (a one-dimensional invariant subspace). In the case of a finite-dimensional space over the field of real numbers every linear transformation has a one-dimensional or two-dimensional invariant subspace.
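Each eigen vector spans a one-dimensional invariant subspace, since $f(x)=\lambda x$ stays on the line through $x$. A sketch with an illustrative symmetric matrix, whose eigen values happen to be $1$ and $3$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # illustrative matrix of f

w, V = np.linalg.eig(A)      # eigen values w[i], eigen vectors in columns V[:, i]

# Each column satisfies f(x) = lambda x, i.e. it spans an invariant line.
all_eigen = all(np.allclose(A @ V[:, i], w[i] * V[:, i]) for i in range(len(w)))
eigs_ok = np.allclose(np.sort(w), [1.0, 3.0])
```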
  
A linear transformation $f$, defined on a finite-dimensional vector space $V$, is called a diagonalizable linear transformation if there is a basis in $V$ in which the matrix of this transformation has diagonal form (cf. [[Diagonal matrix|Diagonal matrix]]). In other words, a linear transformation is diagonalizable if the space has a basis consisting of eigen vectors of this linear transformation. However, not every linear transformation has a basis of eigen vectors, even in a space over the field of complex numbers. E.g. the linear transformation of a two-dimensional space given by the matrix
$$\begin{pmatrix}1&1\\0&1\end{pmatrix}$$
has a unique one-dimensional invariant subspace with basis $(1,0)$.
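The failure of diagonalizability here is visible numerically: both eigen values equal $1$, but the eigen space, the kernel of $A-E$, is only one-dimensional, so no basis of eigen vectors exists:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # the matrix from the article

eigenvalues = np.linalg.eigvals(A)
eigs_ok = np.allclose(eigenvalues, [1.0, 1.0])   # both eigen values are 1

# dim ker(A - E) = 2 - rank(A - E) = 1 < 2: only one independent eigen vector,
# so A is not diagonalizable; (1, 0) spans its only invariant line.
eigenspace_dim = 2 - np.linalg.matrix_rank(A - np.eye(2))
fixes_line = np.allclose(A @ np.array([1.0, 0.0]), np.array([1.0, 0.0]))
```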
  
In a finite-dimensional vector space over the field of complex numbers (or any algebraically closed field) there is for every linear transformation a basis in which the matrix of this transformation has block form (cf. [[Block-diagonal operator|Block-diagonal operator]]), with Jordan blocks on the main diagonal and zeros elsewhere. A Jordan block of the first order consists of one number $\lambda$; a Jordan block of order $k$ is a square matrix of order $k$ of the form
$$\begin{pmatrix}\lambda & 1 & 0 &\cdots& 0\\ 0&\lambda&1&\cdots&0\\ \vdots&&\ddots&\ddots&\vdots\\ 0&0&\cdots&\lambda&1\\ 0&0&\cdots&0&\lambda\end{pmatrix}.$$
The numbers $\lambda$ are the eigen values of the matrix of the linear transformation. To one and the same $\lambda$ several blocks of the same order may correspond, as well as blocks of different orders. The matrix consisting of Jordan blocks is called the Jordan normal form (or Jordan canonical form) of the matrix.
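A single Jordan block of order $k$ is characterised by its nilpotent part: $(J-\lambda E)^k=0$ while $(J-\lambda E)^{k-1}\ne 0$. A sketch with an illustrative block ($\lambda=2$, $k=3$):

```python
import numpy as np

lam, k = 2.0, 3
# Jordan block of order 3: lam on the diagonal, 1's on the superdiagonal.
J = lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

N = J - lam * np.eye(k)      # nilpotent part of the block
# (J - lam E)^k = 0, but not sooner -- the hallmark of one block of order k.
nilpotent = np.allclose(np.linalg.matrix_power(N, k), 0)
not_sooner = not np.allclose(np.linalg.matrix_power(N, k - 1), 0)
```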
  
A linear transformation $f$, defined on a Euclidean (unitary) space (cf. [[Unitary space|Unitary space]]), is said to be self-adjoint (respectively, Hermitian) if for any two vectors $x,y\in V$ one has $(x,f(y))=(y,f(x))$ (respectively, $(x,f(y))=\overline{(y,f(x))}\;$).
  
A linear transformation, defined on a finite-dimensional Euclidean (unitary) space, is self-adjoint (Hermitian) if and only if its matrix $A$ in some (and therefore any) orthonormal basis is symmetric (respectively, Hermitian, cf. [[Hermitian matrix|Hermitian matrix]]; [[Symmetric matrix|Symmetric matrix]]). A self-adjoint (Hermitian) linear transformation, defined on a finite-dimensional Euclidean (respectively, unitary) space, has an orthonormal basis in which its matrix has diagonal form. The main diagonal consists of the (always real) eigen values of the matrix.
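The spectral statement can be checked for an illustrative symmetric matrix: the eigen values are real, the eigen vectors form an orthonormal basis, and in that basis the matrix is diagonal:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric: matrix of a self-adjoint f (illustrative)

w, Q = np.linalg.eigh(A)     # real eigen values (ascending), orthonormal columns

eigs_ok = np.allclose(w, [1.0, 3.0])               # real eigen values
orthonormal = np.allclose(Q.T @ Q, np.eye(2))      # orthonormal eigen basis
diagonalized = np.allclose(Q.T @ A @ Q, np.diag(w))  # diagonal form in that basis
```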
  
A linear transformation $f$, defined on a Euclidean (unitary) space $V$, is said to be isometric or orthogonal (respectively, unitary) if for every vector $x\in V$,
$$||f(x)|| = ||x||.$$
A linear transformation, defined on a finite-dimensional Euclidean (unitary) space, is isometric (respectively, unitary) if and only if its matrix $A$ in some (and then in any) orthonormal basis is orthogonal (respectively, unitary, cf. [[Orthogonal matrix|Orthogonal matrix]]; [[Unitary matrix|Unitary matrix]]). For every isometric linear transformation, defined on a finite-dimensional Euclidean space, there is an orthonormal basis in which the matrix of the transformation consists of blocks of the first and second orders on its main diagonal. The blocks of the first order are the real eigen values of the matrix $A$ of the transformation, equal to $+1$ and $-1$, and the blocks of the second order have the form
$$\begin{pmatrix}\cos \phi &-\sin \phi\\ \sin \phi&\cos \phi\end{pmatrix},$$
where $\cos \phi$ and $\sin \phi$ are the real and imaginary parts of the complex eigen value $\lambda=\cos \phi+i\sin \phi$ of $A$; the other entries of the matrix are zero. For every unitary transformation, defined on a unitary space, there is an orthonormal basis in which the matrix of this transformation is diagonal and on the main diagonal there are numbers of absolute value 1.
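A second-order block of the form above is a rotation; it preserves norms, its matrix is orthogonal, and its complex eigen values lie on the unit circle. A sketch with an arbitrary angle:

```python
import numpy as np

phi = 0.7   # arbitrary illustrative angle
Q = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])   # the second-order block

x = np.array([3.0, 4.0])
preserves_norm = np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
orthogonal = np.allclose(Q.T @ Q, np.eye(2))
# Eigen values cos(phi) +/- i sin(phi) have absolute value 1.
unit_modulus = np.allclose(np.abs(np.linalg.eigvals(Q)), 1.0)
```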
  
Every linear transformation, defined on a finite-dimensional Euclidean (unitary) space, is the product of a self-adjoint (cf. [[Self-adjoint linear transformation|Self-adjoint linear transformation]]) and an isometric linear transformation (respectively, of a Hermitian and a unitary linear transformation).
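One way to exhibit such a factorisation numerically is via the singular value decomposition: from $A=USV^{T}$ one gets $A=(UV^{T})(VSV^{T})$, an orthogonal (isometric) factor times a symmetric (self-adjoint) one. This is a sketch under that construction, with an illustrative matrix; it is one possible route, not the article's prescribed method:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])   # illustrative invertible matrix

U, s, Vt = np.linalg.svd(A)
Q = U @ Vt                   # orthogonal (isometric) factor
P = Vt.T @ np.diag(s) @ Vt   # symmetric positive semi-definite (self-adjoint) factor

reconstructs = np.allclose(Q @ P, A)          # A = Q P
q_isometric = np.allclose(Q.T @ Q, np.eye(2))
p_selfadjoint = np.allclose(P, P.T)
```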
 
  
 
====References====
<table><TR><TD valign="top">[1]</TD> <TD valign="top"> P.S. Aleksandrov, "Lectures on analytical geometry", Moscow (1968) (In Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top"> I.M. Gel'fand, "Lectures on linear algebra", Moscow (1971) (In Russian)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top"> N.V. Efimov, E.R. Rozendorn, "Linear algebra and multi-dimensional geometry", Moscow (1970) (In Russian)</TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top"> P.R. Halmos, "Finite-dimensional vector spaces", v. Nostrand (1958)</TD></TR></table>
  
  
  
 
====References====
<table><TR><TD valign="top">[a1]</TD> <TD valign="top"> N. Bourbaki, "Elements of mathematics", '''2. Linear and multilinear algebra''', Addison-Wesley (1973) pp. Chapt. 2 (Translated from French)</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top"> N. Jacobson, "Lectures in abstract algebra", '''2. Linear algebra''', v. Nostrand (1953)</TD></TR></table>

Revision as of 10:23, 12 November 2011

A mapping of a vector space into itself under which the image of the sum of two vectors is the sum of their images and the image of the product of a vector by a number is the product of the image of the vector by this number. If $V$ is a vector space, $f$ is a linear transformation defined on it, $x,y$ are any vectors of the space, and $\lambda$ is any number (an element of a field), then $$f(x+y)=f(x)+f(y),\quad f(\lambda x) = \lambda f(x).$$ If a vector space $V$ has finite dimension $n$, $e_1,\dots,e_n$ is a basis of it, $x_1,\dots,x_n$ are the coordinates of an arbitrary vector $x$ in this basis, and $y_1,\dots,y_n$ are the coordinates of its image $y=f(x)$, then the coordinates of the vector $y$ are expressed in terms of the coordinates of the vector $x$ by linear homogeneous functions: $$\begin{aligned}y_1&=a_{11}x_1+\dots+a_{1n}x_n,\\ &\;\;\vdots\\ y_n&=a_{n1}x_1+\dots+a_{nn}x_n.\end{aligned}$$ The matrix $$A=\begin{pmatrix}a_{11}&\cdots&a_{1n}\\ \vdots&\ddots&\vdots \\ a_{n1}&\cdots&a_{nn}\end{pmatrix}$$ is called the matrix of the linear transformation $f$ in the basis $e_1,\dots,e_n$. Its columns consist of the coordinates of the images of the basis vectors. If $$C=\begin{pmatrix}c_{11}&\cdots&c_{1n}\\ \vdots&\ddots&\vdots \\ c_{n1}&\cdots&c_{nn}\end{pmatrix}$$ is the transition matrix from the basis $e_1,\dots,e_n$ to a basis $e_1',\dots,e_n'$: $$\begin{aligned}e_1'&=c_{11}e_1+\dots+c_{n1}e_n,\\ &\;\;\vdots\\ e_n'&=c_{1n}e_1+\dots+c_{nn}e_n,\end{aligned}$$ then in the basis $e_1',\dots,e_n'$ the matrix $B$ of the linear transformation $f$ is $B=C^{-1}AC$.
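For concreteness, the change-of-basis formula can be checked numerically. The following is a minimal pure-Python sketch for the $2\times 2$ case; the matrices $A$ and $C$ are arbitrary illustrative choices (any invertible $C$ would do).

```python
# Minimal 2x2 matrix helpers to check the change-of-basis formula B = C^{-1} A C.
# A and C below are arbitrary illustrative choices (C must be invertible).

def mat_mul(P, Q):
    """Product of two 2x2 matrices."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(P):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return [[ P[1][1] / det, -P[0][1] / det],
            [-P[1][0] / det,  P[0][0] / det]]

A = [[2.0, 1.0], [0.0, 3.0]]   # matrix of f in the basis e_1, e_2
C = [[1.0, 1.0], [0.0, 1.0]]   # transition matrix to the basis e_1', e_2'

B = mat_mul(mat_inv(C), mat_mul(A, C))   # matrix of f in the new basis
print(B)   # [[2.0, 0.0], [0.0, 3.0]]
```

In this particular example the new basis happens to consist of eigenvectors of $f$, so $B$ comes out diagonal.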

The sum of two linear transformations $f$ and $g$ is the transformation $h$ such that for any vector $x\in V$, $$h(x)=f(x)+g(x).$$ The product of a linear transformation $f$ by a number $\lambda$ is the transformation $k$ for which $k(x)=\lambda f(x)$ for every vector $x\in V$.

The product of a linear transformation $f$ by a linear transformation $g$ is the transformation $$l(x) = g(f(x)).$$ The sum of two linear transformations, the product of a linear transformation by a number, and the product of two linear transformations (in any order) are themselves linear transformations. The linear transformations form an algebra. In the case of a finite-dimensional space of dimension $n$, the algebra of its linear transformations is isomorphic to the algebra of square matrices of order $n$ with entries in the field over which the vector space is constructed.
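The correspondence between composition of transformations and multiplication of their matrices can be illustrated with a small sketch; the matrices and the test vector below are arbitrary illustrative choices.

```python
# Check that composing linear transformations corresponds to multiplying
# their matrices: if l(x) = g(f(x)), then the matrix of l is B A, where
# A is the matrix of f and B is the matrix of g.

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_vec(P, v):
    return [sum(P[i][k] * v[k] for k in range(2)) for i in range(2)]

A = [[1, 2], [3, 4]]   # matrix of f
B = [[0, 1], [1, 0]]   # matrix of g
x = [5, -1]            # arbitrary test vector

lhs = mat_vec(B, mat_vec(A, x))   # g(f(x))
rhs = mat_vec(mat_mul(B, A), x)   # (BA) x
print(lhs, rhs)   # [11, 3] [11, 3]
```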

A linear transformation $f$ under which a vector space is mapped onto itself is said to be invertible if there is a transformation $f^{-1}$ such that $$ff^{-1} = f^{-1}f = E$$ where $E$ is the identity transformation. The transformation $f^{-1}$ is a linear transformation and is called the inverse transformation of $f$. A linear transformation defined on a finite-dimensional vector space is invertible if and only if the determinant of its matrix in some (and therefore in any) basis is non-zero. If $A$ is the matrix of an invertible linear transformation $f$, then the matrix of the inverse $f^{-1}$ is $A^{-1}$. The invertible linear transformations form a group with respect to multiplication. In the case of a vector space of finite dimension $n$, this group is isomorphic to the group of non-singular square matrices of order $n$.
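The determinant criterion for invertibility can be demonstrated in the $2\times 2$ case, where the adjugate formula gives the inverse explicitly; the matrix $A$ below is an arbitrary illustrative choice.

```python
# A 2x2 matrix is invertible iff its determinant is non-zero; in that case
# the adjugate formula gives the inverse, and A A^{-1} equals the identity E.

def det2(P):
    return P[0][0] * P[1][1] - P[0][1] * P[1][0]

def inv2(P):
    d = det2(P)
    if d == 0:
        raise ValueError("matrix is singular: no inverse exists")
    return [[ P[1][1] / d, -P[0][1] / d],
            [-P[1][0] / d,  P[0][0] / d]]

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [1.0, 1.0]]    # det = 1, so A is invertible
I = mat_mul(A, inv2(A))         # should be the identity matrix E
print(I)                        # [[1.0, 0.0], [0.0, 1.0]]
```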

A subspace $V'$ of a vector space $V$ is called an invariant subspace with respect to a linear transformation $f$ if $f(x)\in V'$ for every vector $x\in V'$. A non-zero vector $x\in V$ is called an eigen vector of a linear transformation $f$, corresponding to the eigen value $\lambda$, if $f(x)=\lambda x$. In the case of a finite-dimensional space over the field of complex numbers (or, more generally, an algebraically closed field) every linear transformation has an eigen vector (a one-dimensional invariant subspace). In the case of a finite-dimensional space over the field of real numbers every linear transformation has a one-dimensional or two-dimensional invariant subspace.
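The defining relation $f(x)=\lambda x$ for an eigen vector can be checked on a concrete example; the matrix and vectors below are arbitrary illustrative choices (for $A=\begin{pmatrix}2&1\\1&2\end{pmatrix}$ the eigen values are $3$ and $1$).

```python
# Check the eigenvector relation f(x) = lambda * x for a concrete 2x2 matrix.

def mat_vec(P, v):
    return [sum(P[i][k] * v[k] for k in range(2)) for i in range(2)]

A = [[2, 1], [1, 2]]

x = [1, 1]
print(mat_vec(A, x))    # [3, 3] = 3 * x, so x is an eigenvector for lambda = 3

y = [1, -1]
print(mat_vec(A, y))    # [1, -1] = 1 * y, so y is an eigenvector for lambda = 1
```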

A linear transformation $f$, defined on a finite-dimensional vector space $V$, is called a diagonalizable linear transformation if there is a basis in $V$ in which the matrix of this transformation has diagonal form (cf. Diagonal matrix). In other words, a linear transformation is diagonalizable if the space has a basis consisting of eigen vectors of this linear transformation. However, not every linear transformation has a basis of eigen vectors even in a space over the field of complex numbers. E.g. the linear transformation of a two-dimensional space given by the matrix $$\begin{pmatrix}1&1\\0&1\end{pmatrix}$$ has a unique one-dimensional invariant subspace with basis $(1,0)$.
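The failure of diagonalizability for the shear example above can be made explicit: every eigenvector of this transformation is a multiple of $(1,0)$, so no basis of eigenvectors exists.

```python
# The shear matrix [[1, 1], [0, 1]] has the single eigenvalue 1 (its
# characteristic polynomial is (1 - t)^2).  An eigenvector (a, b) must
# satisfy (A - E)(a, b) = 0, and (A - E)(a, b) = (b, 0), which vanishes
# only when b = 0.  Hence there is no basis of eigenvectors.

def shifted(v):             # (A - E) v  for  A = [[1, 1], [0, 1]]
    a, b = v
    return [b, 0]

print(shifted([1, 0]))      # [0, 0]: (1, 0) is an eigenvector
print(shifted([0, 1]))      # [1, 0]: (0, 1) is not
```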

In a finite-dimensional vector space over the field of complex numbers (or any algebraically closed field) there is for every linear transformation a basis in which the matrix of this transformation has block form (cf. Block-diagonal operator), with Jordan blocks on the main diagonal and zeros elsewhere. A Jordan block of the first order consists of one number $\lambda$; a Jordan block of order $k$ is a square matrix of order $k$ of the form $$\begin{pmatrix}\lambda & 1 & 0 &\cdots& 0\\ 0&\lambda&1&\cdots&0\\ \vdots& &\ddots&\ddots&\vdots\\ 0&0&\cdots&\lambda&1\\ 0&0&\cdots&0&\lambda\end{pmatrix}.$$ The numbers $\lambda$ are the eigen values of the matrix of the linear transformation. To one and the same $\lambda$ several blocks of the same order may correspond, as well as blocks of different orders. The matrix consisting of Jordan blocks is called the Jordan normal form (or Jordan canonical form) of the matrix.
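A characteristic property of a Jordan block $J$ of order $k$ is that $N=J-\lambda E$ is nilpotent: $N^k=0$ while $N^{k-1}\ne 0$. A sketch for $k=3$ (the value $\lambda=5$ is an arbitrary choice):

```python
# For a Jordan block J of order 3 with eigenvalue lam, the matrix
# N = J - lam*E is nilpotent: N^3 = 0 while N^2 != 0.

def mat_mul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

lam = 5
J = [[lam, 1, 0],
     [0, lam, 1],
     [0, 0, lam]]
N = [[J[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]

N2 = mat_mul(N, N)
N3 = mat_mul(N2, N)
print(N2)   # [[0, 0, 1], [0, 0, 0], [0, 0, 0]]  -- not yet zero
print(N3)   # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]  -- zero
```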

A linear transformation $f$, defined on a Euclidean (unitary) space (cf. Unitary space), is said to be self-adjoint (respectively, Hermitian) if for any two vectors $x,y\in V$ one has $(x,f(y))=(y,f(x))$ (respectively, $(x,f(y))=\overline{(y,f(x))}\;$).

A linear transformation, defined on a finite-dimensional Euclidean (unitary) space, is self-adjoint (Hermitian) if and only if its matrix $A$ in some (and therefore any) orthonormal basis is symmetric (respectively, Hermitian, cf. Hermitian matrix; Symmetric matrix). A self-adjoint (Hermitian) linear transformation, defined on a finite-dimensional Euclidean (respectively, unitary) space, has an orthonormal basis in which its matrix has diagonal form. The main diagonal consists of the (always real) eigen values of the matrix.
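In particular, eigenvectors of a symmetric matrix corresponding to distinct eigen values are orthogonal; a sketch for a $2\times 2$ example (the matrix is an arbitrary illustrative choice with eigen values $3$ and $-1$):

```python
# For the symmetric matrix A = [[1, 2], [2, 1]] the vectors (1, 1) and
# (1, -1) are eigenvectors for the distinct eigenvalues 3 and -1, and
# they are orthogonal.

def mat_vec(P, v):
    return [sum(P[i][k] * v[k] for k in range(2)) for i in range(2)]

A = [[1, 2], [2, 1]]   # symmetric
u = [1, 1]             # eigenvector for eigenvalue 3
v = [1, -1]            # eigenvector for eigenvalue -1

print(mat_vec(A, u))             # [3, 3] = 3 * u
print(mat_vec(A, v))             # [-1, 1] = -1 * v
print(u[0]*v[0] + u[1]*v[1])     # 0: the eigenvectors are orthogonal
```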

A linear transformation $f$, defined on a Euclidean (unitary) space $V$, is said to be isometric or orthogonal (respectively, unitary) if for every vector $x\in V$, $$||f(x)|| = ||x||.$$ A linear transformation, defined on a finite-dimensional Euclidean (unitary) space, is isometric (respectively, unitary) if and only if its matrix $A$ in some (and then in any) orthonormal basis is orthogonal (respectively, unitary, cf. Orthogonal matrix; Unitary matrix). For every isometric linear transformation, defined on a finite-dimensional Euclidean space, there is an orthonormal basis in which the matrix of the transformation consists of blocks of the first and second orders on its main diagonal. The blocks of the first order are the real eigen values of the matrix $A$ of the transformation, equal to $+1$ and $-1$, and the blocks of the second order have the form $$\begin{pmatrix}\cos \phi &-\sin \phi\\ \sin \phi&\cos \phi\end{pmatrix}$$ where $\cos \phi$ and $\sin \phi$ are the real and imaginary parts of the complex eigen value $\lambda=\cos \phi+i\sin \phi$ of $A$, and the other entries of $A$ are zero. For every unitary transformation, defined on a unitary space, there is an orthonormal basis in which the matrix of this transformation is diagonal and on the main diagonal there are numbers of absolute value 1.
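The norm-preserving property of a rotation block of the second order can be checked directly; the angle $\phi$ and the vector $x$ below are arbitrary illustrative choices.

```python
import math

# A plane rotation is an isometric (orthogonal) transformation:
# it preserves the norm of every vector.

phi = math.pi / 6
R = [[math.cos(phi), -math.sin(phi)],
     [math.sin(phi),  math.cos(phi)]]

x = [3.0, 4.0]
Rx = [R[0][0]*x[0] + R[0][1]*x[1],
      R[1][0]*x[0] + R[1][1]*x[1]]

norm = lambda v: math.hypot(v[0], v[1])
print(norm(x))      # 5.0
print(norm(Rx))     # 5.0 up to floating-point rounding
```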

Every linear transformation, defined on a finite-dimensional Euclidean (unitary) space, is the product of a self-adjoint (cf. Self-adjoint linear transformation) and an isometric linear transformation (respectively, of a Hermitian and a unitary linear transformation).
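This decomposition can be illustrated on a $2\times 2$ example (an arbitrary illustrative choice): the matrix $\begin{pmatrix}0&-2\\2&0\end{pmatrix}$ factors as the symmetric scaling $2E$ times the rotation through $90^\circ$.

```python
# Illustration of the decomposition "self-adjoint times isometric":
# A = [[0, -2], [2, 0]] equals S Q, where S = 2E is symmetric
# (self-adjoint) and Q is the rotation through 90 degrees (isometric).

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = [[2, 0], [0, 2]]    # self-adjoint factor (symmetric scaling)
Q = [[0, -1], [1, 0]]   # isometric factor (orthogonal: rotation by 90 degrees)
print(mat_mul(S, Q))    # [[0, -2], [2, 0]]
```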

References

[1] P.S. Aleksandrov, "Lectures on analytical geometry" , Moscow (1968) (In Russian)
[2] I.M. Gel'fand, "Lectures on linear algebra" , Moscow (1971) (In Russian)
[3] N.V. Efimov, E.R. Rozendorn, "Linear algebra and multi-dimensional geometry" , Moscow (1970) (In Russian)
[4] P.R. Halmos, "Finite-dimensional vector spaces" , v. Nostrand (1958)


Comments

References

[a1] N. Bourbaki, "Elements of mathematics" , 2. Linear and multilinear algebra , Addison-Wesley (1973) pp. Chapt. 2 (Translated from French)
[a2] N. Jacobson, "Lectures in abstract algebra" , 2. Linear algebra , v. Nostrand (1953)
How to Cite This Entry:
Linear transformation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Linear_transformation&oldid=15932
This article was adapted from an original article by A.S. Parkhomenko (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article