Square-root method

A method for solving a system of linear algebraic equations $ Ax= b $ with a non-degenerate Hermitian matrix $ A $. Among the direct methods it is most efficient when implemented on a computer.

The computing scheme of the method in the general case is based on a factorization of $ A $ in the form

$$ \tag{1 } A = S ^ {*} DS, $$

where $ S $ is an upper-triangular matrix with real positive diagonal entries and $ D $ is a diagonal matrix with diagonal entries $ 1 $ or $ - 1 $. From (1) one immediately obtains the recurrence relations for the calculation of the elements $ s _ {ij} $ and $ d _ {ii} $ of the matrices $ S $ and $ D $:

$$ \tag{2 } \left . \begin{array}{c} d _ {ii} = \mathop{\rm sign} \left ( a _ {ii} - \sum _ { k=1 } ^ { i-1 } | s _ {ki} | ^ {2} d _ {kk} \right ) , \\ s _ {ii} = \left | a _ {ii} - \sum _ { k=1 } ^ { i-1 } | s _ {ki} | ^ {2} d _ {kk} \right | ^ {1/2} , \\ s _ {ij} = \frac{a _ {ij} - \sum _ { k=1 } ^ { i-1 } \overline{ {s _ {ki} }}\; s _ {kj} d _ {kk} }{s _ {ii} d _ {ii} } ,\ j> i. \\ \end{array} \right \} $$
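
For orientation, the recurrences (2) can be coded directly. The following is a minimal Python/NumPy sketch (the function name square_root_factor and the use of NumPy are illustrative choices, not part of the article); it assumes, as above, that $ A $ is Hermitian and non-degenerate.

import numpy as np

def square_root_factor(A):
    # Factor a non-degenerate Hermitian matrix A as A = S* D S, following (2):
    # S upper triangular with real positive diagonal, D diagonal with entries +1 or -1.
    n = A.shape[0]
    S = np.zeros((n, n), dtype=complex)
    d = np.zeros(n)
    for i in range(n):
        # a_ii minus the contribution of the rows already computed
        t = (A[i, i] - sum(abs(S[k, i]) ** 2 * d[k] for k in range(i))).real
        d[i] = 1.0 if t > 0 else -1.0      # d_ii = sign(...); t != 0 since A is non-degenerate
        S[i, i] = abs(t) ** 0.5            # s_ii = |...|^{1/2}
        for j in range(i + 1, n):
            r = A[i, j] - sum(np.conj(S[k, i]) * S[k, j] * d[k] for k in range(i))
            S[i, j] = r / (S[i, i] * d[i]) # s_ij, j > i
    return S, np.diag(d)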

After the factorization (1) is carried out, the solution of the original system is reduced to the successive solution of the two systems $ S ^ {*} Dy= b $ and $ Sx= y $ with triangular matrices. In this respect the square-root method does not differ from most other direct methods for solving a system (see Gauss method).
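
Continuing the sketch above (solve_with_factors is again an illustrative name, not from the article), the two triangular systems can be solved by forward and back substitution:

def solve_with_factors(S, D, b):
    # Solve A x = b given A = S* D S: first S* D y = b (S* D is lower
    # triangular, so forward substitution), then S x = y (back substitution).
    n = S.shape[0]
    d = np.diag(D)
    y = np.zeros(n, dtype=complex)
    for i in range(n):
        r = b[i] - sum(np.conj(S[k, i]) * d[k] * y[k] for k in range(i))
        y[i] = r / (np.conj(S[i, i]) * d[i])
    x = np.zeros(n, dtype=complex)
    for i in reversed(range(n)):
        r = y[i] - sum(S[i, j] * x[j] for j in range(i + 1, n))
        x[i] = r / S[i, i]
    return x

With both sketches in place, solve_with_factors(*square_root_factor(A), b) should agree with np.linalg.solve(A, b) up to rounding.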

In the real case, when $ A $ is a symmetric matrix, the scheme (2) corresponds to the factorization $ A = S ^ {T} DS $ with $ S $ a real matrix, and is somewhat simplified. A substantial simplification of the scheme (2) occurs when $ A $ is a positive-definite matrix. In this case $ D= E $, the identity matrix, and $ A= S ^ {*} S $.
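
As a small check of this simplification (the 2 by 2 matrix below is an arbitrary positive-definite example; np.linalg.cholesky returns the lower-triangular Cholesky factor, whose conjugate transpose plays the role of $ S $):

A = np.array([[4.0, 2.0], [2.0, 3.0]])                   # positive definite
S, D = square_root_factor(A)
assert np.allclose(D, np.eye(2))                         # D = E
assert np.allclose(S, np.linalg.cholesky(A).conj().T)    # A = S* S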

Variants of the square-root method are known for matrices that are not positive definite, based on a factorization of the form $ A = S ^ {T} S $, with recurrence relations similar to (2) for the calculation of the entries of $ S $. However, this factorization is not efficient for computer implementation, since even for a real matrix $ A $ the calculation of $ S $ may require passing to complex arithmetic.
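
A minimal illustration, added here and not taken from the article: for the $ 1 \times 1 $ matrix $ A = (- 1) $, which is not positive definite, the factorization $ A = S ^ {T} S $ forces

$$ s _ {11} ^ {2} = - 1 ,\ \ s _ {11} = i , $$

so even a real input matrix leads out of real arithmetic unless the diagonal factor $ D $ of (1) is retained.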

The following are important characteristics of the square-root method.

1) Among the direct methods for solving systems that are based on a factorization of the matrix, the square-root method has the highest speed, about twice that of the Gauss method (a rough operation count is given after this list).

2) For the factorization of a matrix by the square-root method it suffices to provide compact information about the matrix, namely the entries of one of its triangular halves. Furthermore, the scheme (2) allows one to store the matrix $ S $ in the computer memory in place of the original information about $ A $. This essentially increases the order of the systems that can be solved (a storage sketch is given after this list).

The computation scheme of the square-root method also admits piecewise processing of the initial matrix in groups of successive rows. Combined with the use of external memory, this enables one to solve systems of high order.

3) The square-root method preserves the band structure of the matrix, that is, the matrix $ S $ has the same shape as the upper half of the initial matrix.

4) The square-root method is especially efficient for systems with positive-definite matrices. In this case the entries of the matrix do not grow during the calculation, which ensures stability of the computation with respect to round-off errors. The guaranteed bound on the error of the computed solution is in this case minimal within the class of direct methods. The general level of errors can be lowered still further by computing the scalar products in the scheme (2) in double precision.

Since the elements do not grow during the computation, the square-root method is also convenient to implement on a computer with fixed-point arithmetic.

5) The decomposition (1) can be used for the calculation of the determinant and inverse of a matrix.
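
For instance (a direct consequence of (1), written out here for illustration), since $ S $ is triangular with real positive diagonal entries,

$$ \det A = \det S ^ {*} \cdot \det D \cdot \det S = \prod _ {i=1} ^ { n } d _ {ii} \cdot \prod _ {i=1} ^ { n } s _ {ii} ^ {2} , $$

while the inverse can be obtained from $ A ^ {-1} = S ^ {-1} D ( S ^ {*} ) ^ {-1} $, where $ S ^ {-1} $ is computed by back substitution and $ D ^ {-1} = D $.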
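
The factor of two mentioned in 1) can be made plausible by a standard operation count (an estimate added here for orientation, not taken from the article): elimination in the Gauss method works on the full matrix, while the recurrences (2) touch only its upper triangular half, roughly

$$ \textrm{Gauss method} : \sim \frac{n ^ {3} }{3} \ \textrm{multiplications} ; \qquad \textrm{square-root method} : \sim \frac{n ^ {3} }{6} \ \textrm{multiplications} + n \ \textrm{square roots} . $$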
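
The compact storage mentioned in 2) can be sketched as follows (illustrative Python, continuing the earlier sketches; the helper packed_index is not part of the article): the $ n( n+ 1)/2 $ entries of the upper triangular half are kept row by row in a one-dimensional array, and the computed $ s _ {ij} $ can overwrite the corresponding $ a _ {ij} $.

def packed_index(i, j, n):
    # Flat position of the entry (i, j), i <= j, when only the upper
    # triangular half of an n-by-n matrix is stored row by row.
    return i * n - i * (i - 1) // 2 + (j - i)

n = 4
A = np.arange(16.0).reshape(n, n)
packed = np.array([A[i, j] for i in range(n) for j in range(i, n)])
assert len(packed) == n * (n + 1) // 2           # 10 numbers instead of 16
assert packed[packed_index(1, 3, n)] == A[1, 3]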

References

[1] V.V. Voevodin, "Numerical methods of algebra", Moscow (1966) (In Russian)
[2] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations", MIR (1977) (Translated from Russian)
[3] J.H. Wilkinson, "The algebraic eigenvalue problem", Oxford Univ. Press (1969)
[4] V.V. Voevodin, "Computational foundations of linear algebra", Moscow (1977) (In Russian)
This article was adapted from an original article by G.D. Kim (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.