Orthogonalization method
Latest revision as of 11:52, 10 February 2020


A method for solving a system of linear algebraic equations $ Ax = b $ with a non-singular matrix $ A $ based on the Gram–Schmidt method of orthogonalization of a vector system.

If

$$ A = \| a_{ij} \| ; \qquad x = (x_1, \dots, x_n)^T ; $$

$$ b = (b_1, \dots, b_n)^T ; $$

$$ a_i = (a_{i1}, \dots, a_{in}, -b_i), \qquad i = 1, \dots, n ; $$

$$ y = (x_1, \dots, x_n, 1)^T , $$

then the initial system of equations can be written in the form

$$ (a_i, y) = 0, \qquad i = 1, \dots, n . $$

This means that solving the system is equivalent to determining a vector $ y $ which has 1 as its last component and is orthogonal to all the vectors $ a_i $, $ i = 1, \dots, n $. For this purpose, an orthogonalization process is applied to the vector system $ a_1, \dots, a_n, a_{n+1} $, where $ a_{n+1} = (0, \dots, 0, 1) $, which is linearly independent by virtue of the non-singularity of the matrix $ A $. This process constructs an orthonormal vector system $ q_1, \dots, q_{n+1} $ with respect to the scalar product $ (x, y) = x^T y $, using the recurrence relations

$$ \tag{*} \left. \begin{array}{l} v_1 = a_1, \qquad q_1 = \frac{v_1}{\sqrt{(v_1, v_1)}}, \\ v_k = a_k + \sum_{i=1}^{k-1} c_i q_i, \qquad c_i = -(a_k, q_i), \\ q_k = \frac{v_k}{\sqrt{(v_k, v_k)}}. \end{array} \right\} $$

The coefficients $ c_i $ are obtained from the condition of orthogonality of $ v_k $ to the vectors $ q_1, \dots, q_{k-1} $. The vectors $ a_1, \dots, a_n $ are linearly expressed in terms of $ q_1, \dots, q_n $, so the vector $ q_{n+1} = (z_1, \dots, z_{n+1}) $ is orthogonal to all the vectors $ a_1, \dots, a_n $. The non-singularity of $ A $ ensures moreover that $ z_{n+1} \neq 0 $. Thus,

$$ \left( \frac{z_1}{z_{n+1}}, \dots, \frac{z_n}{z_{n+1}} \right) $$

is the required solution of the system.
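The procedure above can be sketched numerically. The following is a minimal illustration in Python with NumPy (the function name and the small test system are illustrative choices, not from the article): classical Gram–Schmidt is applied to the extended row vectors $ a_1, \dots, a_{n+1} $, and the solution is read off from the last orthonormal vector.

```python
import numpy as np

def orthogonalization_solve(A, b):
    """Solve A x = b by orthogonalizing the extended row vectors
    a_i = (a_{i1}, ..., a_{in}, -b_i) together with a_{n+1} = (0, ..., 0, 1)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[0]
    vecs = np.vstack([np.hstack([A, -b.reshape(-1, 1)]),  # rows a_1, ..., a_n
                      np.eye(1, n + 1, n)])               # row a_{n+1}
    q = []
    for k in range(n + 1):
        v = vecs[k].copy()
        for qi in q:                        # v_k = a_k - sum_i (a_k, q_i) q_i,
            v -= np.dot(vecs[k], qi) * qi   # i.e. c_i = -(a_k, q_i) as in (*)
        q.append(v / np.sqrt(np.dot(v, v)))
    z = q[-1]                               # q_{n+1} = (z_1, ..., z_{n+1})
    return z[:n] / z[n]                     # x_i = z_i / z_{n+1}
```

For the system $2x_1 + x_2 = 3$, $x_1 + 3x_2 = 4$, the sketch returns the exact solution $(1, 1)$.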

This scheme of the orthogonalization method fits well into the general scheme of direct methods for solving a system: the relations (*) are equivalent to transforming the matrix of the system into the matrix $ L_n \cdots L_1 A $, where

$$ L_k = \left\| \begin{array}{ccccc} 1 & & & & \\ & \ddots & & & \\ c_1 & \cdots & c_k & & \\ & & & \ddots & \\ & & & & 1 \end{array} \right\| , $$

and thereby realizes a factorization of the matrix of the system in the form $ A = LQ $, where $ L $ is a triangular and $ Q $ a unitary matrix.
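This factorization can be checked numerically. The sketch below is not the article's algorithm: it obtains $ L $ and $ Q $ from NumPy's QR decomposition of $ A^T $ rather than from the recurrences (*), and the matrix is a made-up example.

```python
import numpy as np

# A made-up 3x3 non-singular matrix, purely for illustration
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# QR decomposition of A^T gives A^T = Q1 R, hence
# A = R^T Q1^T = L Q, with L = R^T lower triangular
# and Q = Q1^T orthogonal (unitary in the real case)
Q1, R = np.linalg.qr(A.T)
L, Q = R.T, Q1.T
```

One can then verify that $ LQ $ reproduces $ A $, that $ L $ is triangular, and that $ Q Q^T $ is the identity.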

The process of factorizing a matrix $ A $ by means of the orthogonalization method is stable with respect to rounding errors. If the scalar products in (*) are accumulated in double precision, the factorization of the matrix by the orthogonalization method achieves one of the best accuracy estimates in the class of direct methods. In this case, however, the orthogonality of the vectors $ q_1, \dots, q_n $, i.e. the property that $ Q $ is unitary, is unstable with respect to rounding errors, so the solution of the system obtained from the recurrence relations (*) can contain a large error. To eliminate this deficiency, various methods of re-orthogonalization are used (see [1], [2]).
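The effect of re-orthogonalization can be seen in a small experiment (a sketch only: the ill-conditioned test matrix and the "second pass" variant are illustrative choices, not the specific re-orthogonalization methods of [1], [2]).

```python
import numpy as np

def gram_schmidt(A, passes=1):
    """Classical Gram-Schmidt on the columns of A; passes=2 re-orthogonalizes
    each column by running the projection step a second time."""
    Q = np.array(A, dtype=float)
    n = Q.shape[1]
    for k in range(n):
        for _ in range(passes):
            for i in range(k):
                Q[:, k] -= (Q[:, i] @ Q[:, k]) * Q[:, i]
        Q[:, k] /= np.linalg.norm(Q[:, k])
    return Q

# an ill-conditioned test matrix (an illustrative choice, not from the article)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10)) @ np.diag(10.0 ** -np.arange(10))

I = np.eye(10)
err1 = np.linalg.norm(gram_schmidt(A, passes=1).T @ gram_schmidt(A, passes=1) - I)
err2 = np.linalg.norm(gram_schmidt(A, passes=2).T @ gram_schmidt(A, passes=2) - I)
# err2 is far smaller: the extra pass restores orthogonality lost to rounding
```

A single pass leaves the computed $ Q $ visibly non-orthogonal on such a matrix, while the second pass restores orthogonality to near machine precision.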

In speed of operation, the orthogonalization method is inferior to many other direct methods.

References

[1] V.V. Voevodin, "Computational foundations of linear algebra" , Moscow (1977) (In Russian)
[2] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian)
How to Cite This Entry:
Orthogonalization method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Orthogonalization_method&oldid=44403
This article was adapted from an original article by G.D. Kim (originator), which appeared in the Encyclopedia of Mathematics - ISBN 1402006098.