# Orthogonalization method

A method for solving a system of linear algebraic equations $ Ax = b $
with a non-singular matrix $ A $,
based on the Gram–Schmidt process of orthogonalization of a system of vectors.

If

$$ A \ = \ \| a _ {ij} \| ; \ \ x \ = \ (x _ {1}, \dots, x _ {n} ) ^ {T} ; $$

$$ b \ = \ (b _ {1}, \dots, b _ {n} ) ^ {T} ; $$

$$ a _ {i} \ = \ (a _ {i1}, \dots, a _ {in} ,\ - b _ {i} ),\ \ i = 1, \dots, n; $$

$$ y \ = \ (x _ {1}, \dots, x _ {n} ,\ 1) ^ {T} , $$

then the initial system of equations can be written in the form

$$ (a _ {i} ,\ y) \ = \ 0,\ \ i= 1, \dots, n. $$

This means that the solution of the system is equivalent to the determination of a vector $ y $ which has 1 as the last component and is orthogonal to all vectors $ a _ {i} $, $ i = 1, \dots, n $. For this purpose, an orthogonalization process is used for the vector system $ a _ {1}, \dots, a _ {n} , a _ {n+1} $, where $ a _ {n+1} = (0, \dots, 0,\ 1) $, which is linearly independent by virtue of the non-singularity of the matrix $ A $. This process entails the construction of an orthonormal vector system $ q _ {1}, \dots, q _ {n+1} $, with respect to the scalar product $ (x, y) = x ^ {T} y $, using the recurrence relations

$$ \tag{* } \left . \begin{array}{c} v _ {1} \ = \ a _ {1} ,\ \ q _ {1} \ = \ \frac{v _ {1} }{\sqrt {(v _ {1} ,\ v _ {1} ) } } , \\ v _ {k} \ = \ a _ {k} + \sum _ { i=1 } ^ { k-1 } c _ {i} q _ {i} ,\ \ c _ {i} \ = \ -(a _ {k} ,\ q _ {i} ), \\ q _ {k} \ = \ \frac{v _ {k} }{\sqrt {(v _ {k} ,\ v _ {k} ) } } . \end{array} \right \} $$

The coefficients $ c _ {i} $ are obtained from the condition that $ v _ {k} $ is orthogonal to the vectors $ q _ {1}, \dots, q _ {k-1} $. The vectors $ a _ {1}, \dots, a _ {n} $ are linearly expressed in terms of $ q _ {1}, \dots, q _ {n} $, so the vector $ q _ {n+1} = (z _ {1}, \dots, z _ {n+1} ) $ is orthogonal to all vectors $ a _ {1}, \dots, a _ {n} $. Moreover, the non-singularity of $ A $ ensures that $ z _ {n+1} \neq 0 $. Thus,

$$ \left ( \frac{z _ {1} }{z _ {n+1} }, \dots, \frac{z _ {n} }{z _ {n+1} } \right ) $$

is the required solution of the system.
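The procedure above (form the augmented rows $ a _ {i} $, orthogonalize together with $ a _ {n+1} $, rescale the last orthonormal vector) can be sketched in NumPy. This is a minimal illustrative sketch, not from the article; the function name and the test system are assumptions.

```python
import numpy as np

def solve_by_orthogonalization(A, b):
    """Solve A x = b by Gram-Schmidt orthogonalization of the rows
    a_i = (a_i1, ..., a_in, -b_i), i = 1..n, plus a_{n+1} = (0, ..., 0, 1)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[0]
    # The n+1 augmented vectors in R^{n+1}
    vecs = np.vstack([np.hstack([A, -b.reshape(-1, 1)]),
                      np.append(np.zeros(n), 1.0)])
    Q = np.zeros((n + 1, n + 1))
    for k in range(n + 1):
        v = vecs[k].copy()
        for i in range(k):
            # c_i = -(a_k, q_i), so v_k = a_k + sum c_i q_i
            v -= (vecs[k] @ Q[i]) * Q[i]
        Q[k] = v / np.sqrt(v @ v)
    z = Q[n]                  # q_{n+1}: orthogonal to all a_1, ..., a_n
    return z[:n] / z[n]       # x_j = z_j / z_{n+1}

# Hypothetical test system
A = np.array([[4.0, 1.0], [2.0, 3.0]])
b = np.array([1.0, 5.0])
x = solve_by_orthogonalization(A, b)
print(np.allclose(A @ x, b))  # True
```

Since $ q _ {n+1} $ spans the one-dimensional orthogonal complement of $ a _ {1}, \dots, a _ {n} $, it is proportional to $ y = (x _ {1}, \dots, x _ {n} , 1) ^ {T} $, and dividing by the last component recovers $ x $.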

This scheme of the orthogonalization method fits well into the general scheme of direct methods for solving a system: the relations (*) are equivalent to transforming the matrix of the system into the matrix $ L _ {n} \dots L _ {1} A $, where

$$ L _ {k} \ = \ \left \| \begin{array}{ccccc} 1 &{} &{} &{} &{} \\ {} &\ddots &{} &{} &{} \\ c _ {1} &\cdots &c _ {k} &{} &{} \\ {} &{} &{} &\ddots &{} \\ {} &{} &{} &{} &1 \\ \end{array} \right \| , $$

and thereby realizes a factorization of the matrix of the system in the form $ A = LQ $, where $ L $ is a lower-triangular matrix and $ Q $ is a unitary matrix.
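The factorization $ A = LQ $ produced by row-wise Gram–Schmidt can be verified numerically. Below is a hedged NumPy sketch (function name and test matrix are illustrative assumptions) that returns the triangular factor $ L $ and the orthonormal-row factor $ Q $.

```python
import numpy as np

def row_gram_schmidt(A):
    """Gram-Schmidt on the rows of A: returns L (lower triangular)
    and Q (with orthonormal rows) such that A = L @ Q."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    Q = np.zeros_like(A)
    L = np.zeros((n, n))
    for k in range(n):
        v = A[k].copy()
        for i in range(k):
            L[k, i] = A[k] @ Q[i]     # projection coefficient (a_k, q_i)
            v -= L[k, i] * Q[i]
        L[k, k] = np.sqrt(v @ v)      # normalization factor
        Q[k] = v / L[k, k]
    return L, Q

# Hypothetical test matrix
A = np.array([[4.0, 1.0], [2.0, 3.0]])
L, Q = row_gram_schmidt(A)
print(np.allclose(A, L @ Q), np.allclose(Q @ Q.T, np.eye(2)))  # True True
```

Row $ k $ of $ L $ stores exactly the coefficients appearing in the $ k $-th elementary transformation $ L _ {k} $ above, which is why the product of the $ L _ {k} $ yields a triangular factor.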

The process of factorizing a matrix $ A $ by the orthogonalization method is stable with respect to rounding errors. If the scalar products in (*) are accumulated in double precision, the factorization obtained by the orthogonalization method has one of the best accuracy estimates in the class of direct methods. In this case, however, the orthogonality of the vectors $ q _ {1}, \dots, q _ {n} $, i.e. the unitarity of $ Q $, is itself sensitive to rounding errors, so the solution of the system obtained from the recurrence relations (*) can contain a large error. To eliminate this deficiency, various methods of re-orthogonalization are used (see [1], [2]).
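The loss of orthogonality, and the effect of one simple re-orthogonalization (repeating the projection step a second time), can be demonstrated on an ill-conditioned matrix. The sketch below is an illustrative assumption, not from the article; it runs classical Gram–Schmidt on the rows of a Hilbert matrix, with and without an extra re-orthogonalization pass.

```python
import numpy as np

def gs_rows(A, reorth=False):
    """Classical Gram-Schmidt on rows; with reorth=True, each vector is
    projected against the previous q_i a second time (re-orthogonalization)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    Q = np.zeros_like(A)
    for k in range(n):
        v = A[k] - Q[:k].T @ (Q[:k] @ A[k])   # c_i = -(a_k, q_i)
        if reorth:
            v = v - Q[:k].T @ (Q[:k] @ v)     # second projection pass
        Q[k] = v / np.sqrt(v @ v)
    return Q

# An ill-conditioned test matrix: the Hilbert matrix H[i, j] = 1/(i + j + 1)
n = 10
H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)

errs = {}
for flag in (False, True):
    Q = gs_rows(H, reorth=flag)
    errs[flag] = np.max(np.abs(Q @ Q.T - np.eye(n)))
    print(f"reorth={flag}: orthogonality error {errs[flag]:.1e}")
```

Without re-orthogonalization the departure of $ Q Q ^ {T} $ from the identity grows with the condition number of the matrix; the extra pass brings it back near machine precision.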

In speed, however, the orthogonalization method is inferior to many other direct methods.

#### References

[1] V.V. Voevodin, "Computational foundations of linear algebra", Moscow (1977) (In Russian)

[2] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations", MIR (1977) (Translated from Russian)

**How to Cite This Entry:**
Orthogonalization method. *Encyclopedia of Mathematics.* URL: http://encyclopediaofmath.org/index.php?title=Orthogonalization_method&oldid=52143