
Linear transformation

From Encyclopedia of Mathematics
Revision as of 10:23, 12 November 2011 by Ulf Rehmann

A mapping of a vector space into itself under which the image of the sum of two vectors is the sum of their images and the image of the product of a vector by a number is the product of the image of the vector by that number. If $V$ is a vector space, $f$ is a linear transformation defined on it, $x,y$ are any vectors of the space, and $\lambda$ is any number (an element of a field), then $$f(x+y)=f(x)+f(y),\quad f(\lambda x) = \lambda f(x).$$ If the vector space $V$ has finite dimension $n$, $e_1,\dots,e_n$ is a basis of it, $x_1,\dots,x_n$ are the coordinates of an arbitrary vector $x$ in this basis, and $y_1,\dots,y_n$ are the coordinates of its image $y=f(x)$, then the coordinates of $y$ are expressed in terms of the coordinates of $x$ by linear homogeneous functions: $$y_1=a_{11}x_1+\dots+a_{1n}x_n,$$

$$\vdots$$

$$y_n=a_{n1}x_1+\dots+a_{nn}x_n.$$ The matrix $$A=\begin{pmatrix}a_{11}&\cdots&a_{1n}\\ \vdots&\ddots&\vdots \\ a_{n1}&\cdots&a_{nn}\end{pmatrix}$$ is called the matrix of the linear transformation $f$ in the basis $e_1,\dots,e_n$. Its columns consist of the coordinates of the images of the basis vectors. If $$C=\begin{pmatrix}c_{11}&\cdots&c_{1n}\\ \vdots&\ddots&\vdots \\ c_{n1}&\cdots&c_{nn}\end{pmatrix}$$ is the transition matrix from the basis $e_1,\dots,e_n$ to a basis $e_1',\dots,e_n'$: $$e_1'=c_{11}e_1+\dots+c_{n1}e_n,$$

$$\vdots$$

$$e_n'=c_{1n}e_1+\dots+c_{nn}e_n,$$ then in the basis $e_1',\dots,e_n'$ the matrix $B$ of the linear transformation $f$ is $B=C^{-1}AC$.
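As a small numerical illustration (not part of the original article), the following sketch applies a matrix of a linear transformation to coordinates, $y_i=\sum_j a_{ij}x_j$, and checks the change-of-basis formula $B=C^{-1}AC$ for hand-picked $2\times 2$ matrices; all names and the specific matrices are arbitrary choices for this example.

```python
# Sketch: applying the matrix A of a linear transformation to coordinates,
# and computing the matrix B = C^{-1} A C in a new basis.
# Matrices are plain 2x2 lists of lists; the values are illustrative only.

def mat_vec(A, x):
    """Image coordinates: y_i = sum_j a_ij * x_j."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def mat_mul(A, B):
    """Matrix product A B."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv2(C):
    """Inverse of a 2x2 matrix with non-zero determinant."""
    (a, b), (c, d) = C
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [0.0, 3.0]]          # matrix of f in the basis e_1, e_2
C = [[1.0, 1.0], [0.0, 1.0]]          # transition matrix to a new basis
B = mat_mul(mat_mul(inv2(C), A), C)   # matrix of f in the new basis

print(mat_vec(A, [1.0, 1.0]))  # image of the vector with coordinates (1, 1)
print(B)                        # here diagonal: the new basis diagonalizes f
```

Here the columns of $C$ happen to be eigenvectors of $A$, so $B$ comes out diagonal, which previews the diagonalizability discussion below.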

The sum of two linear transformations $f$ and $g$ is the transformation $h$ such that for any vector $x\in V$, $$h(x)=f(x)+g(x).$$ The product of a linear transformation $f$ by a number $\lambda$ is the transformation $k$ for which $k(x)=\lambda f(x)$ for every vector $x\in V$.
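A minimal check of these two definitions (an illustrative addition, with arbitrarily chosen maps on $\mathbb R^2$):

```python
# Checking on a sample vector that (f+g)(x) = f(x) + g(x) and that the
# product of f by a number lam acts as x -> lam * f(x). The two linear
# maps and the vector are hand-picked for this sketch.

def f(x):  # linear map (x1, x2) -> (x1 + x2, x2)
    return [x[0] + x[1], x[1]]

def g(x):  # linear map (x1, x2) -> (2*x1, -x2)
    return [2 * x[0], -x[1]]

def h(x):  # the sum f + g
    return [f(x)[i] + g(x)[i] for i in range(2)]

lam = 3

def k(x):  # the product of f by the number lam
    return [lam * f(x)[i] for i in range(2)]

x = [1, 4]
print(h(x))  # f(x) + g(x) = [5, 4] + [2, -4]
print(k(x))  # 3 * f(x)
```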

The product of a linear transformation $f$ by a linear transformation $g$ is the transformation $l$ defined by $$l(x) = g(f(x)).$$ The sum of two linear transformations, the product of a linear transformation by a number, and the product of two linear transformations (in either order) are themselves linear transformations. Thus the linear transformations of a vector space form an algebra. In the case of a finite-dimensional space of dimension $n$, the algebra of its linear transformations is isomorphic to the algebra of square matrices of order $n$ with entries in the field over which the vector space is constructed.
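Under the isomorphism just mentioned, the product of transformations corresponds to the product of their matrices. A small check of this (an illustrative sketch, with matrices chosen arbitrarily): applying $g\circ f$ to a vector gives the same result as applying the single matrix $GF$.

```python
# Checking that l(x) = g(f(x)) has matrix G F: applying F then G to a
# vector agrees with applying the product matrix G F once.
# 2x2 matrices as lists of lists; values are illustrative only.

def mat_vec(M, x):
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

F = [[1, 2], [0, 1]]   # matrix of f (a shear)
G = [[0, 1], [1, 0]]   # matrix of g (swap of coordinates)

x = [3, 5]
left = mat_vec(G, mat_vec(F, x))   # g(f(x))
right = mat_vec(mat_mul(G, F), x)  # (G F) x
print(left, right)                  # the two results coincide
```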

A linear transformation $f$ that maps a vector space onto itself is said to be invertible if there is a transformation $f^{-1}$ such that $$ff^{-1} = f^{-1}f = E,$$ where $E$ is the identity transformation. The transformation $f^{-1}$ is a linear transformation and is called the inverse transformation of $f$. A linear transformation defined on a finite-dimensional vector space is invertible if and only if the determinant of its matrix in some (and therefore in any) basis is non-zero. If $A$ is the matrix of an invertible linear transformation $f$, then the matrix of the inverse $f^{-1}$ is $A^{-1}$. The invertible linear transformations form a group with respect to multiplication. In the case of a vector space of finite dimension $n$, this group is isomorphic to the group of non-singular square matrices of order $n$.
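A $2\times 2$ illustration of the determinant criterion and the inverse matrix (an added sketch; the matrix is chosen arbitrarily, and exact rational arithmetic is used so the check is free of rounding):

```python
# Sketch: f is invertible since det A != 0, and the matrix of f^{-1}
# is A^{-1}; multiplying them recovers the identity matrix E.
from fractions import Fraction

A = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(1)]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert det != 0  # non-zero determinant: f is invertible

# Explicit inverse of a 2x2 matrix.
A_inv = [[A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det, A[0][0] / det]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

print(mat_mul(A, A_inv))  # the identity matrix
```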

A subspace $V'$ of a vector space $V$ is called an invariant subspace with respect to a linear transformation $f$ if $f(x)\in V'$ for every vector $x\in V'$. A non-zero vector $x\in V$ is called an eigenvector of a linear transformation $f$, corresponding to the eigenvalue $\lambda$, if $f(x)=\lambda x$. In the case of a finite-dimensional space over the field of complex numbers (or, more generally, over an algebraically closed field) every linear transformation has an eigenvector (and hence a one-dimensional invariant subspace). In the case of a finite-dimensional space over the field of real numbers every linear transformation has a one-dimensional or two-dimensional invariant subspace.
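A direct check of the defining relation $f(x)=\lambda x$ on a hand-picked example (an illustrative addition, not from the original article):

```python
# Verifying that x = (1, 1) is an eigenvector of the transformation with
# matrix [[2, 1], [1, 2]] for the eigenvalue lam = 3: f(x) = 3 x.
# The matrix and vector are chosen for this sketch.

A = [[2, 1], [1, 2]]
x = [1, 1]
fx = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]  # f(x)
lam = 3
print(fx, [lam * xi for xi in x])  # the two vectors agree
```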

A linear transformation $f$, defined on a finite-dimensional vector space $V$, is called a diagonalizable linear transformation if there is a basis of $V$ in which the matrix of this transformation has diagonal form (cf. Diagonal matrix). In other words, a linear transformation is diagonalizable if the space has a basis consisting of eigenvectors of this transformation. However, not every linear transformation has a basis of eigenvectors, even in a space over the field of complex numbers. For example, the linear transformation of a two-dimensional space given by the matrix $$\begin{pmatrix}1&1\\0&1\end{pmatrix}$$ has a unique one-dimensional invariant subspace, with basis vector $(1,0)$.
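The failure of diagonalizability for this matrix can be seen concretely (an added sketch): every eigenvector must satisfy $(A-E)v=0$, which forces the second coordinate to vanish, so the eigenvectors fill only the line through $(1,0)$ and no basis of eigenvectors exists.

```python
# The article's example [[1, 1], [0, 1]]: (1, 0) is an eigenvector for
# the eigenvalue 1, but (0, 1) is not, since its image (1, 1) is not a
# multiple of (0, 1). All eigenvectors lie on the line through (1, 0).

A = [[1, 1], [0, 1]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

print(apply(A, [1, 0]))  # equals 1 * (1, 0): an eigenvector
print(apply(A, [0, 1]))  # equals (1, 1): not proportional to (0, 1)
```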

In a finite-dimensional vector space over the field of complex numbers (or any algebraically closed field) there is, for every linear transformation, a basis in which the matrix of this transformation has block-diagonal form (cf. Block-diagonal operator), with Jordan blocks on the main diagonal and zeros elsewhere. A Jordan block of order one consists of a single number $\lambda$; a Jordan block of order $k$ is a square matrix of order $k$ of the form $$\begin{pmatrix}\lambda & 1 & 0 &\cdots& 0\\ 0&\lambda&1&\cdots&0\\ \vdots&&\ddots&\ddots&\vdots\\ 0&0&\cdots&\lambda&1\\ 0&0&\cdots&0&\lambda\end{pmatrix}.$$ The numbers $\lambda$ are the eigenvalues of the matrix of the linear transformation. To one and the same $\lambda$ there may correspond several blocks of the same order, as well as blocks of different orders. The matrix consisting of Jordan blocks is called the Jordan normal form (or Jordan canonical form) of the matrix.
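What distinguishes a Jordan block of order $k>1$ from a diagonal block is that $N=J-\lambda E$ is nilpotent of index exactly $k$: $N^{k}=0$ but $N^{k-1}\neq 0$. A small numerical check of this standard fact (an added sketch, with $k=3$ and $\lambda=2$ chosen arbitrarily):

```python
# For a Jordan block J of order 3 with eigenvalue lam, the matrix
# N = J - lam*E satisfies N^2 != 0 but N^3 = 0.

lam = 2
J = [[lam, 1, 0], [0, lam, 1], [0, 0, lam]]
N = [[J[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

N2 = mat_mul(N, N)
N3 = mat_mul(N2, N)
print(N2)  # still has one non-zero entry, in the upper-right corner
print(N3)  # the zero matrix
```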

A linear transformation $f$, defined on a Euclidean (unitary) space (cf. Unitary space), is said to be self-adjoint (respectively, Hermitian) if for any two vectors $x,y\in V$ one has $(x,f(y))=(y,f(x))$ (respectively, $(x,f(y))=\overline{(y,f(x))}\;$).

A linear transformation, defined on a finite-dimensional Euclidean (unitary) space, is self-adjoint (Hermitian) if and only if its matrix $A$ in some (and therefore any) orthonormal basis is symmetric (respectively, Hermitian, cf. Hermitian matrix; Symmetric matrix). A self-adjoint (Hermitian) linear transformation, defined on a finite-dimensional Euclidean (respectively, unitary) space, has an orthonormal basis in which its matrix has diagonal form. The main diagonal consists of the (always real) eigenvalues of the matrix.
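For a symmetric $2\times 2$ matrix the reality of the eigenvalues can be seen directly from the characteristic equation $\lambda^2-(\operatorname{tr}A)\lambda+\det A=0$, whose discriminant $(a_{11}-a_{22})^2+4a_{12}^2$ is non-negative. A numerical check on an arbitrarily chosen symmetric matrix (an added sketch):

```python
# Eigenvalues of the symmetric matrix [[2, 1], [1, 2]] from the
# characteristic equation; both roots come out real, as stated.
import math

A = [[2, 1], [1, 2]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = tr * tr - 4 * det  # non-negative for every real symmetric 2x2 matrix
lam1 = (tr + math.sqrt(disc)) / 2
lam2 = (tr - math.sqrt(disc)) / 2
print(lam1, lam2)  # the two (real) eigenvalues
```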

A linear transformation $f$, defined on a Euclidean (unitary) space $V$, is said to be isometric or orthogonal (respectively, unitary) if for every vector $x\in V$, $$\|f(x)\| = \|x\|.$$ A linear transformation, defined on a finite-dimensional Euclidean (unitary) space, is isometric (respectively, unitary) if and only if its matrix $A$ in some (and then in any) orthonormal basis is orthogonal (respectively, unitary, cf. Orthogonal matrix; Unitary matrix). For every isometric linear transformation, defined on a finite-dimensional Euclidean space, there is an orthonormal basis in which the matrix of the transformation consists of blocks of the first and second orders on its main diagonal, with all entries outside these blocks equal to zero. The blocks of the first order are the real eigenvalues of the matrix $A$ of the transformation, equal to $+1$ or $-1$, and the blocks of the second order have the form $$\begin{pmatrix}\cos \phi &-\sin \phi\\ \sin \phi&\cos \phi\end{pmatrix},$$ where $\cos \phi$ and $\sin \phi$ are the real and imaginary parts of a complex eigenvalue $\lambda=\cos \phi+i\sin \phi$ of $A$. For every unitary transformation, defined on a finite-dimensional unitary space, there is an orthonormal basis in which the matrix of this transformation is diagonal, with numbers of absolute value 1 on the main diagonal.
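The second-order block above is a rotation through the angle $\phi$; checking on a sample vector that it preserves the Euclidean norm, i.e. that the block is isometric (an added sketch, with $\phi$ and the vector chosen arbitrarily):

```python
# The rotation block with angle phi preserves the norm of every vector;
# here checked for phi = pi/6 on the vector (3, 4), of norm 5.
import math

phi = math.pi / 6
R = [[math.cos(phi), -math.sin(phi)],
     [math.sin(phi),  math.cos(phi)]]
x = [3.0, 4.0]
Rx = [R[0][0] * x[0] + R[0][1] * x[1],
      R[1][0] * x[0] + R[1][1] * x[1]]

def norm(v):
    return math.hypot(v[0], v[1])

print(norm(x), norm(Rx))  # equal up to floating-point rounding
```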

Every linear transformation, defined on a finite-dimensional Euclidean (unitary) space, is the product of a self-adjoint (cf. Self-adjoint linear transformation) and an isometric linear transformation (respectively, of a Hermitian and a unitary linear transformation).
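This factorization can be illustrated with a matrix whose polar factors are obvious (an added sketch; the matrix $A=\begin{pmatrix}0&-2\\2&0\end{pmatrix}$ and its factors are hand-chosen, with $S=2E$ self-adjoint and $Q$ a rotation through $90^\circ$):

```python
# Sketch of the final statement: A = S Q with S symmetric (self-adjoint)
# and Q orthogonal (isometric). Here S = 2*E and Q rotates by 90 degrees.

S = [[2, 0], [0, 2]]    # self-adjoint factor
Q = [[0, -1], [1, 0]]   # isometric factor: Q^T Q = E

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

QT = [[Q[j][i] for j in range(2)] for i in range(2)]
print(mat_mul(QT, Q))   # identity: Q is indeed orthogonal
print(mat_mul(S, Q))    # recovers A = [[0, -2], [2, 0]]
```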

References

[1] P.S. Aleksandrov, "Lectures on analytical geometry", Moscow (1968) (In Russian)
[2] I.M. Gel'fand, "Lectures on linear algebra", Moscow (1971) (In Russian)
[3] N.V. Efimov, E.R. Rozendorn, "Linear algebra and multi-dimensional geometry", Moscow (1970) (In Russian)
[4] P.R. Halmos, "Finite-dimensional vector spaces", Van Nostrand (1958)


Comments

References

[a1] N. Bourbaki, "Elements of mathematics, 2. Linear and multilinear algebra", Addison-Wesley (1973) Chapt. 2 (Translated from French)
[a2] N. Jacobson, "Lectures in abstract algebra, 2. Linear algebra", Van Nostrand (1953)
How to Cite This Entry:
Linear transformation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Linear_transformation&oldid=19621
This article was adapted from an original article by A.S. Parkhomenko (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article