Vector space

{{TEX|done}}
  
''[[Linear space]], over a [[field]] $K$''
  
An [[Abelian group]] $E$, written additively, in which a multiplication of the elements by scalars is defined, i.e. a [[mapping]]
\begin{equation}
K\times E\rightarrow E\colon (\lambda,x)\rightarrow \lambda x,
\end{equation}
which satisfies the following axioms ($x,y\in E$; $\lambda,\mu,1\in K$):
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v0965203.png" /></td> </tr></table>
+
# $\lambda (x+y) = \lambda x + \lambda y$;
 +
# $(\lambda+\mu)x = \lambda x + \mu x$;
 +
# $(\lambda\mu)x=\lambda(\mu x)$;
 +
# $1x=x$.
  
which satisfies the following axioms (<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v0965204.png" />; <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v0965205.png" />):
Axioms 1.–4. imply the following important properties of a [[vector space]] ($0\in E$):
<ol start="5">
<li>$\lambda 0=0$;</li>
<li>$0x=0$;</li>
<li>$(-1)x=-x$.</li>
</ol>
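
For instance, property 6) follows from axiom 2. together with cancellation in the group $E$, and property 7) then follows from axioms 4. and 2.:
\begin{equation}
0x=(0+0)x=0x+0x\;\Rightarrow\;0x=0,\qquad x+(-1)x=(1+(-1))x=0x=0\;\Rightarrow\;(-1)x=-x.
\end{equation}
Property 5) is obtained in the same way from axiom 1.
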
The elements of the vector space are called its points, or vectors; the elements of $K$ are called scalars.
  
The vector spaces most often employed in mathematics and in its applications are those over the field $C$ of [[complex number]]s and over the field $R$ of [[real number]]s; they are said to be complex, respectively real, vector spaces.
  
The axioms of vector spaces express algebraic properties of many classes of objects which are frequently encountered in analysis. The most fundamental and the earliest examples of vector spaces are the $n$-dimensional [[Euclidean space]]s. Of almost equal importance are many function spaces: [[Continuous functions, space of|spaces of continuous functions]], spaces of [[measurable function]]s, spaces of [[summable function]]s, spaces of [[analytic function]]s, and spaces of [[Function of bounded variation|functions of bounded variation]].
  
The concept of a vector space is a special case of the concept of a [[module]] over a [[ring]] — a vector space is a [[unitary module]] over a field. A unitary module over a non-commutative [[skew-field]] is also called a vector space over a skew-field; the theory of such vector spaces is much more difficult than the theory of vector spaces over a field.
  
One important task connected with vector spaces is the study of the geometry of vector spaces, i.e. the study of lines in vector spaces, flat and [[convex set]]s in vector spaces, vector subspaces, and bases in vector spaces.
  
A vector subspace, or simply a subspace, of a vector space $E$ is a subset $F\subset E$ that is closed with respect to the operations of addition and multiplication by a scalar. A subspace, considered apart from its ambient space, is a vector space over the ground field.
  
The straight line passing through two points $x$ and $y$ of a vector space $E$ is the set of elements $z\in E$ of the form $z=\lambda x + (1-\lambda)y$, $\lambda\in K$. A set $G\subset E$ is said to be a flat set if, in addition to two arbitrary points, it also contains the straight line passing through these points. Any flat set is obtained from some subspace $F$ by a parallel shift: $G=x+F$; this means that each element $z\in G$ can be uniquely represented in the form $z=x+y$, $y\in F$, and that this equation realizes a one-to-one correspondence between $F$ and $G$.
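
For example, in the plane $K^2$ the set $F=\{(t,0)\colon t\in K\}$ is a subspace, while its parallel shift
\begin{equation}
G=(0,1)+F=\{(t,1)\colon t\in K\}
\end{equation}
is a flat set which is not a subspace, since it does not contain the zero vector.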
  
The totality of all shifts $F_x=x+F$ of a given subspace $F$ forms a vector space over $K$, called the [[quotient space]] $E/F$, if the operations are defined as follows:
\begin{equation}
F_x+F_y=F_{x+y};\quad\lambda F_x=F_{\lambda x},\quad\lambda\in K.
\end{equation}
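
For example, if $E=K^2$ and $F=\{(t,0)\colon t\in K\}$, then $F_{(x_1,x_2)}=F_{(y_1,y_2)}$ if and only if $x_2=y_2$, and the correspondence
\begin{equation}
F_{(x_1,x_2)}\mapsto x_2
\end{equation}
identifies the quotient space $E/F$ with the one-dimensional space $K$.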
  
Let $M=\{x_\alpha\}_{\alpha\in A}$ be an arbitrary set of vectors in $E$. A linear combination of the vectors $x_\alpha\in E$ is a vector $x$ defined by an expression
\begin{equation}
x=\sum_{\alpha}\lambda_\alpha x_\alpha,\quad\lambda_\alpha\in K,
\end{equation}
in which only a finite number of coefficients differ from zero. The set of all linear combinations of vectors of the set $M$ is the smallest subspace containing $M$ and is said to be the linear envelope of the set $M$. A linear combination is said to be trivial if all coefficients $\lambda_\alpha$ are zero. The set $M$ is said to be a linearly independent set if all non-trivial linear combinations of vectors in $M$ are non-zero.
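
For example, in $K^3$ the set $M=\{(1,0,0),(0,1,0)\}$ is linearly independent, since
\begin{equation}
\lambda_1(1,0,0)+\lambda_2(0,1,0)=(\lambda_1,\lambda_2,0)
\end{equation}
is zero only for $\lambda_1=\lambda_2=0$; the linear envelope of $M$ is the subspace of all vectors of the form $(\lambda_1,\lambda_2,0)$.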
  
Any linearly independent set is contained in some maximal linearly independent set $M_0$, i.e. in a set which ceases to be linearly independent after any element in $E$ has been added to it.
  
Each element $x\in E$ may be uniquely represented as a linear combination of elements of a maximal linearly independent set:
\begin{equation}
x=\sum_{\alpha}\lambda_\alpha x_\alpha,\quad x_\alpha\in M_0.
\end{equation}
  
A maximal linearly independent set is said to be a [[basis]] (an algebraic basis) of the vector space for this reason. All bases of a given vector space have the same [[cardinality]], which is known as the dimension of the vector space. If this cardinality is finite, the space is said to be finite-dimensional; otherwise it is known as an [[Infinite-dimensional space|infinite-dimensional vector space]].
  
The field $K$ may be considered as a one-dimensional vector space over itself; a basis of this vector space is a single element, which may be any element other than zero. A finite-dimensional vector space with a basis of $n$ elements is known as an $n$-dimensional space.
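
For example, the vectors
\begin{equation}
e_1=(1,0,\ldots,0),\quad e_2=(0,1,\ldots,0),\quad\ldots,\quad e_n=(0,\ldots,0,1)
\end{equation}
form a basis of the space $K^n$ of $n$-tuples of elements of $K$: every $x=(\lambda_1,\ldots,\lambda_n)$ is uniquely represented as $x=\sum_{i=1}^n\lambda_ie_i$, so that $K^n$ is $n$-dimensional.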
  
The theory of [[convex set]]s plays an important part in the theory of real and complex vector spaces. A set $M$ in a real vector space is said to be a convex set if for any two points $x$, $y$ in it the segment $tx + (1-t)y$, $t\in [0,1]$, also belongs to $M$.
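
For example, every flat set $G=x+F$ in a real vector space is convex: if $z_1=x+y_1$ and $z_2=x+y_2$ belong to $G$, then
\begin{equation}
tz_1+(1-t)z_2=x+\bigl(ty_1+(1-t)y_2\bigr)\in x+F=G,\quad t\in[0,1].
\end{equation}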
  
The theory of linear functionals on vector spaces and the related theory of duality are important parts of the theory of vector spaces. Let $E$ be a vector space over a field $K$. An additive and homogeneous mapping $f\colon E\rightarrow K$, i.e.
\begin{equation}
f(x+y)=f(x)+f(y),\quad f(\lambda x)=\lambda f(x),
\end{equation}
is said to be a linear functional on $E$. The set $E^*$ of all linear functionals on $E$ forms a vector space over $K$ with respect to the operations
\begin{equation}
(f_1+f_2)(x)=f_1(x)+f_2(x),\quad (\lambda f)(x)=\lambda f(x),\quad x\in E,\quad\lambda\in K,\quad f_1,f_2,f\in E^*.
\end{equation}
  
This vector space is said to be the conjugate, or dual, space of $E$. Several geometrical notions are connected with the concept of a conjugate space. Let $D\subset E$ (respectively, $\Gamma\subset E^*$); the set
\begin{equation}
D^\perp=\left\{f\in E^*\colon f(x)=0\quad \text{for all}\; x\in D\right\},
\end{equation}
or
\begin{equation}
\Gamma_\perp=\left\{x\in E\colon f(x)=0\quad \text{for all}\; f\in\Gamma\right\},
\end{equation}
is said to be the [[annihilator]] or orthogonal complement of $D$ (respectively, of $\Gamma$); here $D^\perp$ and $\Gamma_\perp$ are subspaces of $E^*$ and $E$, respectively. If $f$ is a non-zero element of $E^*$, $\{ f\}_\perp$ is a maximal proper linear subspace in $E$, which is sometimes called a hypersubspace; a shift of such a subspace is said to be a [[hyperplane]] in $E$; thus, any hyperplane has the form
\begin{equation}
\{x\colon f(x)=\lambda\},\quad f\neq 0,\quad f\in E^*,\quad\lambda\in K.
\end{equation}
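
For example, every linear functional on the $n$-dimensional space $K^n$ has the form
\begin{equation}
f(x)=a_1x_1+\ldots+a_nx_n,\quad a_i\in K,
\end{equation}
so that $(K^n)^*$ is again $n$-dimensional, and the hyperplanes in $K^n$ are exactly the sets $\{x\colon a_1x_1+\ldots+a_nx_n=\lambda\}$ with not all $a_i$ equal to zero.
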
If $F$ is a subspace of the vector space $E$, there exist natural [[isomorphism]]s between $F^*$ and $E^*/F^\perp$ and between $(E/F)^*$ and $F^\perp$.
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520113.png" /></td> </tr></table>
+
A subset $\Gamma\subset E^*$ is said to be a total subset over $E$ if its annihilator contains only the zero element, $\Gamma_\perp=\{ 0\}$.
  
Each linearly independent set $\{ x_\alpha\}_{\alpha\in A}\subset E$ can be brought into correspondence with a conjugate set $\{ f_\alpha\}_{\alpha\in A}\subset E^*$, i.e. with a set such that $f_\alpha(x_\beta)=\delta_{\alpha\beta}$ (the [[Kronecker symbol]]) for all $\alpha$, $\beta\in A$. The set of pairs $\{ x_\alpha,f_\alpha\}$ is said to be a [[biorthogonal system]]. If the set $\{ x_\alpha\}$ is a basis in $E$, then $\{ f_\alpha\}$ is total over $E$.
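
For example, the standard basis vectors $e_1,\ldots,e_n$ of $K^n$ and the coordinate functionals
\begin{equation}
f_i(x)=x_i,\quad i=1,\ldots,n,
\end{equation}
satisfy $f_i(e_j)=\delta_{ij}$ and thus form a biorthogonal system; since $f_1(x)=\ldots=f_n(x)=0$ implies $x=0$, the set $\{f_i\}$ is total over $K^n$.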
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520126.png" /></td> </tr></table>
+
An important chapter in the theory of vector spaces is the theory of linear transformations of these spaces. Let $E_1$, $E_2$ be two vector spaces over the same field $K$. Then an additive and homogeneous mapping $T$ of $E_1$ into $E_2$, i.e.
\begin{equation}
T(x+y)=Tx+Ty;\quad T(\lambda x)=\lambda Tx;\quad x,y\in E_1,
\end{equation}
is said to be a linear mapping, or [[linear operator]], from $E_1$ into $E_2$. A special case of this concept is a [[linear functional]], or a linear operator from $E_1$ into $K$. An example of a linear mapping is the natural mapping from $E$ onto the quotient space $E/F$, which assigns to each element $x\in E$ the flat set $F_x\in E/F$. The set $\mathcal{L}(E_1,E_2)$ of all linear operators $T\colon E_1\rightarrow E_2$ forms a vector space with respect to the operations
\begin{equation}
(T_1+T_2)x=T_1x+T_2x;\quad (\lambda T)x=\lambda Tx;\quad x\in E_1;\quad\lambda\in K;\quad T_1,T_2,T\in\mathcal{L}(E_1,E_2).
\end{equation}
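
For example, if $E_1=K^n$ and $E_2=K^m$ with standard bases $e_1,\ldots,e_n$ and $e_1',\ldots,e_m'$, then every linear operator $T\colon K^n\rightarrow K^m$ is determined by the $m\times n$ matrix $(a_{ij})$ defined by $Te_j=\sum_{i=1}^m a_{ij}e_i'$:
\begin{equation}
(Tx)_i=\sum_{j=1}^n a_{ij}x_j,\quad i=1,\ldots,m,
\end{equation}
and the vector space $\mathcal{L}(K^n,K^m)$ is $mn$-dimensional.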
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520127.png" /></td> </tr></table>
+
Two vector spaces $E_1$ and $E_2$ are said to be isomorphic if there exists a linear operator (an [[isomorphism]]) which realizes a one-to-one correspondence between their elements. $E_1$ and $E_2$ are isomorphic if and only if their bases have equal cardinalities.
  
Let $T$ be a linear operator from $E_1$ into $E_2$. The conjugate linear operator, or dual linear operator, of $T$ is the linear operator $T^*$ from $E_2^*$ into $E_1^*$ defined by the equation
\begin{equation}
(T^*\phi)(x)=\phi(Tx)\quad\text{for all}\;x\in E_1,\;\phi\in E_2^*.
\end{equation}
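
For example, if $T\colon K^n\rightarrow K^m$ is given by the matrix $(a_{ij})$ as above, and the spaces $(K^m)^*$ and $(K^n)^*$ are identified with $K^m$ and $K^n$ by means of the coordinate functionals, then $T^*$ acts by the transposed matrix:
\begin{equation}
(T^*\phi)_j=\sum_{i=1}^m a_{ij}\phi_i,\quad j=1,\ldots,n.
\end{equation}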
  
The relations $T^{*^{-1}}(0)=[T(E_1)]^{\perp}$, $T^*(E_2^*)=[T^{-1}(0)]^{\perp}$ are valid, which imply that $T^*$ is an isomorphism if and only if $T$ is an isomorphism.
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520139.png" /></td> </tr></table>
+
The theory of [[Bilinear mapping|bilinear]] and [[multilinear mapping]]s of vector spaces is closely connected with the theory of linear mappings of vector spaces.
  
Problems of extending linear mappings are an important group of problems in the theory of vector spaces. Let $F$ be a subspace of a vector space $E_1$, let $E_2$ be a linear space over the same field as $E_1$ and let $T_0$ be a linear mapping from $F$ into $E_2$; it is required to find an extension $T$ of $T_0$ which is defined on all of $E_1$ and which is a linear mapping from $E_1$ into $E_2$. Such an extension always exists, but the problem may prove to be unsolvable when additional restrictions are imposed on the mappings (restrictions related to supplementary structures in the vector space, e.g. to a topology or to an order relation). Examples of solutions of extension problems are the [[Hahn–Banach theorem]] and theorems on the extension of positive functionals in spaces with a cone.
 
 
An important branch of the theory of vector spaces is the theory of operations over a vector space, i.e. methods for constructing new vector spaces from given vector spaces. Examples of such operations are the well-known methods of taking a subspace and forming the quotient space by it. Other important operations include the construction of direct sums, direct products and tensor products of vector spaces.
  
Let $\{E_\alpha\}_{\alpha\in I}$ be a family of vector spaces over a field $K$. The set $E$ which is the product of $E_\alpha$ can be made into a vector space over $K$ by introducing the operations
\begin{equation}
(x_\alpha)+(y_\alpha)=(x_\alpha+y_\alpha);\quad\lambda(x_\alpha)=(\lambda x_\alpha);\quad \lambda\in K;\quad x_\alpha,y_\alpha\in E_\alpha,\quad \alpha\in I.
\end{equation}
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520161.png" /></td> </tr></table>
+
The resulting vector space $E$ is called the direct product of the vector spaces $E_\alpha$, and is written as $\prod_{\alpha\in I}E_\alpha$. The subspace of the vector space $E$ consisting of all sequences $(x_\alpha)$ for each of which the set $\{\alpha\colon x_\alpha\neq 0\}$ is finite, is said to be the direct sum of the vector spaces $E_\alpha$, and is written as $\sum_{\alpha}E_\alpha$ or $\oplus_{\alpha}E_\alpha$. These two notions coincide if the number of terms is finite. In this case one uses the notations:
\begin{equation}
E_1+\ldots+E_n,\quad E_1\oplus\ldots\oplus E_n\quad\text{or}\quad E_1\times\ldots\times E_n.
\end{equation}
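
For example, if $E_\alpha=K$ for all $\alpha$ in a countably infinite index set $I$, then $\prod_{\alpha\in I}E_\alpha$ is the space of all sequences of elements of $K$, while $\sum_\alpha E_\alpha$ is the proper subspace of sequences having only finitely many non-zero terms; for finitely many terms the two constructions give the same space,
\begin{equation}
K\oplus\ldots\oplus K=K\times\ldots\times K=K^n\quad (n\ \text{terms}).
\end{equation}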
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520162.png" /></td> </tr></table>
+
Let $E_1$ and $E_2$ be vector spaces over the same field $K$; let $E_1'$, $E_2'$ be total subspaces of the vector spaces $E_1^*$, $E_2^*$, and let $E_1\Box E_2$ be the vector space with the set of all elements of the space $E_1\times E_2$ as its basis. Each element $x\Box y\in E_1\Box E_2$ can be brought into correspondence with a bilinear function $b=T(x,y)$ on $E_1'\times E_2'$ using the formula $b(f,g)=f(x)g(y)$, $f\in E_1'$, $g\in E_2'$. This mapping on the basis vectors $x\Box y\in E_1\Box E_2$ may be extended to a linear mapping $T$ from the vector space $E_1\Box E_2$ into the vector space of all bilinear functionals on $E_1'\times E_2'$. Let $E_0=T^{-1}(0)$. The tensor product of $E_1$ and $E_2$ is the quotient space $E_1\otimes E_2=(E_1\Box E_2)/E_0$; the image of the element $x\Box y$ is written as $x\otimes y$. The vector space $E_1\otimes E_2$ is isomorphic to the vector space of bilinear functionals on $E_1'\times E_2'$ (cf. [[Tensor product|Tensor product]] of vector spaces).
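
For example, if $E_1$ and $E_2$ are finite-dimensional with bases $x_1,\ldots,x_m$ and $y_1,\ldots,y_n$ (so that $E_1'=E_1^*$, $E_2'=E_2^*$), then the elements $x_i\otimes y_j$ form a basis of $E_1\otimes E_2$, and hence
\begin{equation}
\dim (E_1\otimes E_2)=\dim E_1\cdot\dim E_2;
\end{equation}
in particular, $K^m\otimes K^n$ is isomorphic to $K^{mn}$.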
  
The most interesting part of the theory of vector spaces is the theory of finite-dimensional vector spaces. However, the concept of infinite-dimensional vector spaces has also proved fruitful and has interesting applications, especially in the theory of topological vector spaces, i.e. vector spaces equipped with topologies fitted in some manner to their algebraic structure.
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520172.png" /></td> </tr></table>
 
  
or
 
 
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520173.png" /></td> </tr></table>
 
 
Let <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520174.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520175.png" /> be vector spaces over the same field <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520176.png" />; let <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520177.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520178.png" /> be total subspaces of the vector spaces <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520179.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520180.png" />, and let <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520181.png" /> be the vector space with the set of all elements of the space <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520182.png" /> as its basis. Each element <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520183.png" /> can be brought into correspondence with a bilinear function <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520184.png" /> on <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520185.png" /> using the formula <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520186.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520187.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520188.png" />. This mapping on the basis vectors <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520189.png" /> may be extended to a linear mapping <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520190.png" /> from the vector space <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520191.png" /> into the vector space of all bilinear functionals on <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520192.png" />. Let <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520193.png" />. The tensor product of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520194.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520195.png" /> is the quotient space <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520196.png" />; the image of the element <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520197.png" /> is written as <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520198.png" />. 
The vector space <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520199.png" /> is isomorphic to the vector space of bilinear functionals on <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/v/v096/v096520/v096520200.png" /> (cf. [[Tensor product|Tensor product]] of vector spaces).
 
 
The most interesting part of the theory of vector spaces is the theory of finite-dimensional vector spaces. However, the concept of infinite-dimensional vector spaces has also proved fruitful and has interesting applications, especially in the theory of topological vector spaces, i.e. vector spaces equipped with topologies fitted in some manner to its algebraic structure.
 
  
 
====References====
 
<table>
<TR><TD valign="top">[1]</TD> <TD valign="top">  N. Bourbaki,  "Elements of mathematics. Algebra: Algebraic structures. Linear algebra" , '''1''' , Addison-Wesley  (1974)  pp. Chapt.1;2  (Translated from French)</TD></TR>
<TR><TD valign="top">[2]</TD> <TD valign="top">  D.A. Raikov,  "Vector spaces" , Noordhoff  (1965)  (Translated from Russian)</TD></TR>
<TR><TD valign="top">[3]</TD> <TD valign="top">  M.M. Day,  "Normed linear spaces" , Springer  (1958)</TD></TR>
<TR><TD valign="top">[4]</TD> <TD valign="top">  R.E. Edwards,  "Functional analysis: theory and applications" , Holt, Rinehart &amp; Winston  (1965)</TD></TR>
<TR><TD valign="top">[5]</TD> <TD valign="top">  P.R. Halmos,  "Finite-dimensional vector spaces" , v. Nostrand  (1958)</TD></TR>
<TR><TD valign="top">[6]</TD> <TD valign="top">  I.M. Glazman,  Yu.I. Lyubich,  "Finite-dimensional linear analysis: a systematic presentation in problem form" , M.I.T.  (1974)  (Translated from Russian)</TD></TR>
<TR><TD valign="top">[7]</TD> <TD valign="top">  G. Strang,  "Linear algebra and its applications" , Harcourt, Brace, Jovanovich  (1988)</TD></TR>
<TR><TD valign="top">[8]</TD> <TD valign="top">  B. Noble,  J.W. Daniel,  "Applied linear algebra" , Prentice-Hall  (1977)</TD></TR>
<TR><TD valign="top">[9]</TD> <TD valign="top">  W. Noll,  "Finite dimensional spaces" , M. Nijhoff  (1987)  pp. Sect. 2.7</TD></TR>
</table>

Latest revision as of 11:28, 21 June 2016


Linear space, over a field $K$

An Abelian group $E$, written additively, in which a multiplication of the elements by scalars is defined, i.e. a mapping \begin{equation} K\times E\rightarrow E\colon (\lambda,x)\rightarrow \lambda x, \end{equation} which satisfies the following axioms ($x,y\in E$; $\lambda,\mu,1\in K$):

  1. $\lambda (x+y) = \lambda x + \lambda y$;
  2. $(\lambda+\mu)x = \lambda x + \mu x$;
  3. $(\lambda\mu)x=\lambda(\mu x)$;
  4. $1x=x$.

Axioms 1.–4. imply the following important properties of a vector space ($0\in E$):

  1. $\lambda 0=0$;
  2. $0x=0$;
  3. $(-1)x=-x$.

The elements of the vector space are called its points, or vectors; the elements of $K$ are called scalars.

The vector spaces most often employed in mathematics and in its applications are those over the field $C$ of complex numbers and over the field $R$ of real numbers; they are said to be complex, respectively real, vector spaces.

The axioms of vector spaces express algebraic properties of many classes of objects which are frequently encountered in analysis. The most fundamental and the earliest examples of vector spaces are the $n$-dimensional Euclidean spaces. Of almost equal importance are many function spaces: spaces of continuous functions, spaces of measurable functions, spaces of summable functions, spaces of analytic functions, and spaces of functions of bounded variation.

The concept of a vector space is a special case of the concept of a module over a ring — a vector space is a unitary module over a field. A unitary module over a non-commutative skew-field is also called a vector space over a skew-field; the theory of such vector spaces is much more difficult than the theory of vector spaces over a field.

One important task connected with vector spaces is the study of the geometry of vector spaces, i.e. the study of lines in vector spaces, flat and convex sets in vector spaces, vector subspaces, and bases in vector spaces.

A vector subspace, or simply a subspace, of a vector space $E$ is a subset $F\subset E$ that is closed with respect to the operations of addition and multiplication by a scalar. A subspace, considered apart from its ambient space, is a vector space over the ground field.

The straight line passing through two points $x$ and $y$ of a vector space $E$ is the set of elements $z\in E$ of the form $z=\lambda x + (1-\lambda)y$, $\lambda\in K$. A set $G\in E$ is said to be a flat set if, in addition to two arbitrary points, it also contains the straight line passing through these points. Any flat set is obtained from some subspace by a parallel shift: $G=x+F$; this means that each element $z\in G$ can be uniquely represented in the form $z=x+y$, $y\in F$, and that this equation realizes a one-to-one correspondence between $F$ and $G$.

The totality of all shifts $F_x=x+F$ of a given subspace $F$ forms a vector space over $K$, called the quotient space $E/F$, if the operations are defined as follows: \begin{equation} F_x+F_y=F_{x+y};\quad\lambda F_x=F_{\lambda x},\quad\lambda\in K. \end{equation}

Let $M=\{x_\alpha\}_{\alpha\in A}$ be an arbitrary set of vectors in $E$. A linear combination of the vectors $x_\alpha\in E$ is a vector $x$ defined by an expression \begin{equation} x=\sum_{\alpha}\lambda_\alpha x_\alpha,\quad\lambda_\alpha\in K, \end{equation} in which only a finite number of coefficients differ from zero. The set of all linear combinations of vectors of the set $M$ is the smallest subspace containing $M$ and is said to be the linear envelope of the set $M$. A linear combination is said to be trivial if all coefficients $\lambda_\alpha$ are zero. The set $M$ is said to be a linearly independent set if all non-trivial linear combinations of vectors in $M$ are non-zero.

Any linearly independent set is contained in some maximal linearly independent set $M_0$, i.e. in a set which ceases to be linearly independent after any element in $E$ has been added to it.

Each element $x\in E$ may be uniquely represented as a linear combination of elements of a maximal linearly independent set: \begin{equation} x=\sum_{\alpha}\lambda_\alpha x_\alpha,\quad x_\alpha\in M_0. \end{equation}

A maximal linearly independent set is said to be a basis (an algebraic basis) of the vector space for this reason. All bases of a given vector space have the same cardinality, which is known as the dimension of the vector space. If this cardinality is finite, the space is said to be finite-dimensional; otherwise it is known as an infinite-dimensional vector space.

The field $K$ may be considered as a one-dimensional vector space over itself; a basis of this vector space is a single element, which may be any element other than zero. A finite-dimensional vector space with a basis of $n$ elements is known as an $n$-dimensional space.
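
For example, in the space $K^n$ of $n$-tuples $x=(\xi_1,\ldots,\xi_n)$ the vectors $e_1=(1,0,\ldots,0),\ldots,e_n=(0,\ldots,0,1)$ form a basis: each $x$ has the unique representation \begin{equation} x=\xi_1e_1+\ldots+\xi_ne_n, \end{equation} so that $K^n$ is $n$-dimensional.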

The theory of convex sets plays an important part in the theory of real and complex vector spaces. A set $M$ in a real vector space is said to be a convex set if for any two points $x$, $y$ in it the segment $tx + (1-t)y$, $t\in [0,1]$, also belongs to $M$.
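
For example, in $R^2$ the half-plane $M=\{(\xi_1,\xi_2)\colon\xi_2\geq 0\}$ is convex: if $\xi_2\geq 0$ and $\eta_2\geq 0$, then $t\xi_2+(1-t)\eta_2\geq 0$ for all $t\in[0,1]$. Every subspace and every flat set is convex, but the union of two distinct parallel lines is not.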

The theory of linear functionals on vector spaces and the related theory of duality are important parts of the theory of vector spaces. Let $E$ be a vector space over a field $K$. An additive and homogeneous mapping $f\colon E\rightarrow K$, i.e. \begin{equation} f(x+y)=f(x)+f(y),\quad f(\lambda x)=\lambda f(x), \end{equation} is said to be a linear functional on $E$. The set $E^*$ of all linear functionals on $E$ forms a vector space over $K$ with respect to the operations \begin{equation} (f_1+f_2)(x)=f_1(x)+f_2(x),\quad (\lambda f)(x)=\lambda f(x),\quad x\in E,\quad\lambda\in K,\quad f_1,f_2,f\in E^*. \end{equation}
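
For example, every linear functional on $K^n$ has the form \begin{equation} f(x)=a_1\xi_1+\ldots+a_n\xi_n,\quad x=(\xi_1,\ldots,\xi_n), \end{equation} where $a_i=f(e_i)$ for the basis vectors $e_i$ above; hence $(K^n)^*$ is again an $n$-dimensional vector space.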

This vector space is said to be the conjugate, or dual, space of $E$. Several geometrical notions are connected with the concept of a conjugate space. Let $D\subset E$ (respectively, $\Gamma\subset E^*$); the set \begin{equation} D^\perp=\left\{f\in E^*\colon f(x)=0\quad \text{for all}\; x\in D\right\}, \end{equation} or \begin{equation} \Gamma_\perp=\left\{x\in E\colon f(x)=0\quad \text{for all}\; f\in\Gamma\right\}, \end{equation} is said to be the annihilator or orthogonal complement of $D$ (respectively, of $\Gamma$); here $D^\perp$ and $\Gamma_\perp$ are subspaces of $E^*$ and $E$, respectively. If $f$ is a non-zero element of $E^*$, $\{ f\}_\perp$ is a maximal proper linear subspace in $E$, which is sometimes called a hypersubspace; a shift of such a subspace is said to be a hyperplane in $E$; thus, any hyperplane has the form \begin{equation} \{x\colon f(x)=\lambda\},\quad f\neq 0,\quad f\in E^*,\quad\lambda\in K. \end{equation}
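
For example, if $E=K^3$ and $D$ is the subspace of vectors $(\xi_1,\xi_2,0)$, then $D^\perp$ consists of the functionals $f(x)=a_3\xi_3$, $a_3\in K$; for the functional $f(x)=\xi_3$ the set $\{x\colon f(x)=1\}$ is a hyperplane, namely the flat set of all vectors whose third coordinate equals $1$.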

If $F$ is a subspace of the vector space $E$, there exist natural isomorphisms between $F^*$ and $E^*/F^\perp$ and between $(E/F)^*$ and $F^\perp$.
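
The first isomorphism assigns to the class of a functional $f\in E^*$ modulo $F^\perp$ the restriction of $f$ to $F$; the second assigns to a functional $g$ on $E/F$ the functional $x\rightarrow g(F_x)$ on $E$, which vanishes on $F$.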

A subset $\Gamma\subset E^*$ is said to be a total subset over $E$ if its annihilator contains only the zero element, $\Gamma_\perp=\{ 0\}$.

Each linearly independent set $\{ x_\alpha\}_{\alpha\in A}\subset E$ can be brought into correspondence with a conjugate set $\{ f_\alpha\}_{\alpha\in A}\subset E^*$, i.e. with a set such that $f_\alpha(x_\beta)=\delta_{\alpha\beta}$ (the Kronecker symbol) for all $\alpha$, $\beta\in A$. The set of pairs $\{ x_\alpha,f_\alpha\}$ is said to be a biorthogonal system. If the set $\{ x_\alpha\}$ is a basis in $E$, then $\{ f_\alpha\}$ is total over $E$.
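
For example, for the basis $e_1,\ldots,e_n$ of $K^n$ the coordinate functionals $f_i$, $f_i(x)=\xi_i$, form a conjugate set, $f_i(e_j)=\delta_{ij}$, and the pairs $\{e_i,f_i\}$ constitute a biorthogonal system; since the $e_i$ form a basis, the set $\{f_i\}$ is total over $K^n$.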

An important chapter in the theory of vector spaces is the theory of linear transformations of these spaces. Let $E_1$, $E_2$ be two vector spaces over the same field $K$. An additive and homogeneous mapping $T$ of $E_1$ into $E_2$, i.e. \begin{equation} T(x+y)=Tx+Ty;\quad T(\lambda x)=\lambda Tx;\quad x,y\in E_1,\quad\lambda\in K, \end{equation} is said to be a linear mapping, or linear operator, from $E_1$ into $E_2$. A special case of this concept is a linear functional, i.e. a linear operator from $E_1$ into $K$. An example of a linear mapping is the natural mapping from $E$ onto the quotient space $E/F$, which assigns to each element $x\in E$ the flat set $F_x\in E/F$ containing it. The set $\mathcal{L}(E_1,E_2)$ of all linear operators $T\colon E_1\rightarrow E_2$ forms a vector space with respect to the operations \begin{equation} (T_1+T_2)x=T_1x+T_2x;\quad (\lambda T)x=\lambda Tx;\quad x\in E_1;\quad\lambda\in K;\quad T_1,T_2,T\in\mathcal{L}(E_1,E_2). \end{equation}
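
For example, every $m\times n$ matrix $(a_{ij})$ with entries in $K$ defines a linear operator $T\colon K^n\rightarrow K^m$ by \begin{equation} (Tx)_i=\sum_{j=1}^{n}a_{ij}\xi_j,\quad x=(\xi_1,\ldots,\xi_n),\quad i=1,\ldots,m, \end{equation} and every linear operator from $K^n$ into $K^m$ arises in this way from exactly one such matrix.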

Two vector spaces $E_1$ and $E_2$ are said to be isomorphic if there exists a linear operator (an isomorphism) which realizes a one-to-one correspondence between their elements. $E_1$ and $E_2$ are isomorphic if and only if their bases have equal cardinalities.

Let $T$ be a linear operator from $E_1$ into $E_2$. The conjugate linear operator, or dual linear operator, of $T$ is the linear operator $T^*$ from $E_2^*$ into $E_1^*$ defined by the equation \begin{equation} (T^*\phi)(x)=\phi(Tx)\quad\text{for all}\;x\in E_1,\;\phi\in E_2^*. \end{equation}
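
For example, if $T\colon K^n\rightarrow K^m$ is given by a matrix $(a_{ij})$ as above and linear functionals on $K^n$ and $K^m$ are written in coordinate form, then $T^*$ is given by the transposed matrix: if $\phi(y)=\sum_i b_i\eta_i$, then \begin{equation} (T^*\phi)(x)=\sum_{j}\Bigl(\sum_{i}b_ia_{ij}\Bigr)\xi_j. \end{equation}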

The relations $(T^*)^{-1}(0)=[T(E_1)]^{\perp}$ and $T^*(E_2^*)=[T^{-1}(0)]^{\perp}$ are valid; they imply that $T^*$ is an isomorphism if and only if $T$ is an isomorphism.

The theory of bilinear and multilinear mappings of vector spaces is closely connected with the theory of linear mappings of vector spaces.

Problems of extending linear mappings form an important group of problems in the theory of vector spaces. Let $F$ be a subspace of a vector space $E_1$, let $E_2$ be a linear space over the same field as $E_1$, and let $T_0$ be a linear mapping from $F$ into $E_2$; it is required to find an extension $T$ of $T_0$ which is defined on all of $E_1$ and which is a linear mapping from $E_1$ into $E_2$. Such an extension always exists, but the problem may prove to be unsolvable when additional restrictions are imposed on the mappings (restrictions related to supplementary structures in the vector space, e.g. to a topology or to an order relation). Examples of solutions of extension problems are the Hahn–Banach theorem and theorems on the extension of positive functionals in spaces with a cone.
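
In the purely algebraic situation an extension may always be constructed by means of a basis: if a basis $\{x_\alpha\}$ of $F$ is completed to a basis $\{x_\alpha\}\cup\{y_\beta\}$ of $E_1$, then setting $Tx_\alpha=T_0x_\alpha$, $Ty_\beta=0$ and extending by linearity gives a linear mapping $T\colon E_1\rightarrow E_2$ coinciding with $T_0$ on $F$.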

An important branch of the theory of vector spaces is the theory of operations over a vector space, i.e. methods for constructing new vector spaces from given vector spaces. Examples of such operations are the well-known methods of taking a subspace and forming the quotient space by it. Other important operations include the construction of direct sums, direct products and tensor products of vector spaces.

Let $\{E_\alpha\}_{\alpha\in I}$ be a family of vector spaces over a field $K$. The set $E$ which is the Cartesian product of the sets $E_\alpha$ can be made into a vector space over $K$ by introducing the coordinate-wise operations \begin{equation} (x_\alpha)+(y_\alpha)=(x_\alpha+y_\alpha);\quad\lambda(x_\alpha)=(\lambda x_\alpha);\quad \lambda\in K;\quad x_\alpha,y_\alpha\in E_\alpha,\quad \alpha\in I. \end{equation}

The resulting vector space $E$ is called the direct product of the vector spaces $E_\alpha$, and is written as $\prod_{\alpha\in I}E_\alpha$. The subspace of $E$ consisting of all families $(x_\alpha)$ for which the set $\{\alpha\colon x_\alpha\neq 0\}$ is finite is said to be the direct sum of the vector spaces $E_\alpha$, and is written as $\sum_{\alpha}E_\alpha$ or $\oplus_{\alpha}E_\alpha$. These two notions coincide if the number of terms is finite; in this case one uses the notations \begin{equation} E_1+\ldots+E_n,\quad E_1\oplus\ldots\oplus E_n\quad\text{or}\quad E_1\times\ldots\times E_n. \end{equation}
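
For example, the space $K^n$ may be regarded as the direct sum (and the direct product) of $n$ copies of the one-dimensional space $K$: $K^n=K\oplus\ldots\oplus K$. For an infinite index set $I$, the direct sum of copies of $K$ consists of the families with only finitely many non-zero components and is a proper subspace of the direct product $\prod_{\alpha\in I}K$.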

Let $E_1$ and $E_2$ be vector spaces over the same field $K$; let $E_1'$, $E_2'$ be total subspaces of the vector spaces $E_1^*$, $E_2^*$, and let $E_1\Box E_2$ be the vector space having the set of all elements of the space $E_1\times E_2$ as its basis. Each basis element $x\Box y\in E_1\Box E_2$ can be brought into correspondence with a bilinear function $b=T(x\Box y)$ on $E_1'\times E_2'$ by means of the formula $b(f,g)=f(x)g(y)$, $f\in E_1'$, $g\in E_2'$. This correspondence, defined on the basis vectors $x\Box y\in E_1\Box E_2$, extends to a linear mapping $T$ from the vector space $E_1\Box E_2$ into the vector space of all bilinear functionals on $E_1'\times E_2'$. Let $E_0=T^{-1}(0)$. The tensor product of $E_1$ and $E_2$ is the quotient space $E_1\otimes E_2=(E_1\Box E_2)/E_0$; the image of the element $x\Box y$ is written as $x\otimes y$. The vector space $E_1\otimes E_2$ is isomorphic to the vector space of bilinear functionals on $E_1'\times E_2'$ (cf. Tensor product of vector spaces).
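
For example, if $E_1=K^m$ and $E_2=K^n$ with bases $e_1,\ldots,e_m$ and $e_1',\ldots,e_n'$, then the elements $e_i\otimes e_j'$ form a basis of $K^m\otimes K^n$, so that this space is $mn$-dimensional; the bilinear functional corresponding to $\sum_{i,j}c_{ij}\,e_i\otimes e_j'$ is determined by the matrix $(c_{ij})$.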

The most interesting part of the theory of vector spaces is the theory of finite-dimensional vector spaces. However, the concept of an infinite-dimensional vector space has also proved fruitful and has interesting applications, especially in the theory of topological vector spaces, i.e. vector spaces equipped with topologies fitted in some manner to their algebraic structure.

