Minimal iteration method
A method for solving linear algebraic equations $ A x = b $, in which the solution $ x $ is represented as a linear combination of basis vectors which are orthogonal in some metric connected with the matrix of the system.
In the case of a symmetric matrix $ A $, the orthogonal system of vectors $ p_{0}, \dots, p_{n-1} $ is constructed using the three-term recurrence formula
$$ \tag{1} p_{k+1} = A p_{k} - \alpha_{k} p_{k} - \beta_{k} p_{k-1}, \quad k = 1, \dots, n-2, $$
with $ p_{1} = A p_{0} - \alpha_{0} p_{0} $ and $ p_{0} $ an arbitrary vector, where
$$ \alpha_{k} = \frac{( A p_{k} , p_{k} ) }{( p_{k} , p_{k} ) }, \quad k = 0, \dots, n-2, $$
$$ \beta_{k} = \frac{( p_{k} , p_{k} ) }{( p_{k-1} , p_{k-1} ) }, \quad k = 1, \dots, n-2. $$
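For illustration, here is a minimal NumPy sketch of this orthogonalization step (all names are illustrative; it assumes a symmetric $ A $, exact arithmetic without breakdown, i.e. no $ p_{r} = 0 $ for $ r < n $, and takes $ p_{0} = b $ as the arbitrary initial vector):

import numpy as np

def build_basis(A, b):
    # Orthogonal vectors p_0, ..., p_{n-1} from the three-term recurrence (1),
    # together with the coefficients alpha_k (k = 0, ..., n-2) and beta_k (k = 1, ..., n-2).
    n = A.shape[0]
    p = [b.astype(float)]            # p_0: chosen here as b; any nonzero vector works
    alpha, beta = [], [0.0]          # beta_0 is never used; kept only for index alignment
    for k in range(n - 1):
        Apk = A @ p[k]
        alpha.append((Apk @ p[k]) / (p[k] @ p[k]))
        p_next = Apk - alpha[k] * p[k]
        if k > 0:
            beta.append((p[k] @ p[k]) / (p[k - 1] @ p[k - 1]))
            p_next = p_next - beta[k] * p[k - 1]
        p.append(p_next)
    return p, alpha, beta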
The solution of the system $ A x = b $ is found by the formula $ x = \sum_{k=0}^{n-1} c_{k} p_{k} $, and the coefficients $ c_{k} $ are given as the solutions of the system
$$ \tag{2} \left . \begin{array}{l} c_{i-1} + \alpha_{i} c_{i} + \beta_{i+1} c_{i+1} = \frac{( b , p_{i} ) }{( p_{i} , p_{i} ) }, \quad i = 1, \dots, n-2, \\ \alpha_{0} c_{0} + \beta_{1} c_{1} = \frac{( b , p_{0} ) }{( p_{0} , p_{0} ) }, \\ c_{n-2} + \alpha_{n-1} c_{n-1} = \frac{( b , p_{n-1} ) }{( p_{n-1} , p_{n-1} ) }. \end{array} \right \} $$
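System (2) is obtained by substituting $ x = \sum_{k=0}^{n-1} c_{k} p_{k} $ into $ A x = b $, rewriting the recurrence (1) as $ A p_{k} = p_{k+1} + \alpha_{k} p_{k} + \beta_{k} p_{k-1} $ (with $ p_{n} = 0 $) and comparing the coefficients of each $ p_{i} $, using the orthogonality of the $ p_{k} $. Continuing the sketch above (the values $ \alpha_{n-1} $ and $ \beta_{n-1} $, which occur in (2) but are not produced by the recurrence, are computed by the same formulas; the dense solve is only for clarity, in practice a tridiagonal solver such as scipy.linalg.solve_banded would be used):

def minimal_iteration(A, b):
    # Solve A x = b for symmetric A: build the orthogonal basis, solve the
    # tridiagonal system (2) for the coefficients c_k and assemble x = sum_k c_k p_k.
    n = A.shape[0]
    p, alpha, beta = build_basis(A, b)
    Ap_last = A @ p[n - 1]
    alpha.append((Ap_last @ p[n - 1]) / (p[n - 1] @ p[n - 1]))    # alpha_{n-1}, same formula
    beta.append((p[n - 1] @ p[n - 1]) / (p[n - 2] @ p[n - 2]))    # beta_{n-1}, same formula
    d = np.array([(b @ p[i]) / (p[i] @ p[i]) for i in range(n)])  # (b, p_i) / (p_i, p_i)
    T = np.zeros((n, n))          # tridiagonal matrix of system (2)
    for i in range(n):
        T[i, i] = alpha[i]
        if i > 0:
            T[i, i - 1] = 1.0
        if i < n - 1:
            T[i, i + 1] = beta[i + 1]
    c = np.linalg.solve(T, d)
    return sum(c[k] * p[k] for k in range(n))

For a small symmetric test matrix this reproduces numpy.linalg.solve(A, b) up to rounding; for larger problems the gradual loss of orthogonality of the $ p_{k} $ in floating point becomes the main practical issue.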
If the orthogonalization algorithm degenerates, that is, if $ p_{r} = 0 $ for some $ r < n $, one has to choose a new initial vector $ p_{0}^{(1)} $ orthogonal to $ p_{0}, \dots, p_{r-1} $ and use it to complete the system of basis vectors to a full basis.
In the case of a non-symmetric matrix, a bi-orthogonal algorithm is used.
If $ A $ is symmetric and positive definite, then constructing an $ A $-orthogonal system $ p_{0}, \dots, p_{n-1} $ by formula (1) with
$$ \alpha_{k} = \frac{( A p_{k} , A p_{k} ) }{( A p_{k} , p_{k} ) }, \quad \beta_{k} = \frac{( A p_{k} , p_{k} ) }{( A p_{k-1} , p_{k-1} ) } $$
enables one to avoid solving the auxiliary system (2) and gives an explicit expression for the coefficients $ c_{k} $: $ c_{k} = ( b , p_{k} ) / ( A p_{k} , p_{k} ) $. In this case the method of $ A $-minimal iteration can be supplemented by the iteration
$$ x_{k+1} = x_{k} + c_{k+1} p_{k+1}, \quad k = 0, \dots, n-2, \quad x_{0} = c_{0} p_{0}, $$
where $ x = x_{n-1} $. This modification of the method does not require repeated use of all the vectors $ p_{0}, \dots, p_{k-1} $. The minimal iteration method is also used for solving the complete eigenvalue problem and for finding the inverse matrix.
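Continuing the NumPy sketches above, a corresponding sketch of this $ A $-minimal iteration for symmetric positive-definite $ A $ (same illustrative assumptions: $ p_{0} = b $, no breakdown):

def a_minimal_iteration(A, b):
    # A-orthogonal variant for symmetric positive-definite A: the coefficients
    # c_k = (b, p_k) / (A p_k, p_k) are explicit and the solution is accumulated
    # as x_{k+1} = x_k + c_{k+1} p_{k+1}, so system (2) is not needed and only
    # the previous pair (p_{k-1}, A p_{k-1}) has to be kept.
    n = A.shape[0]
    p = b.astype(float)                       # p_0
    Ap = A @ p
    x = ((b @ p) / (Ap @ p)) * p              # x_0 = c_0 p_0
    p_prev, Ap_prev = None, None
    for k in range(n - 1):
        alpha = (Ap @ Ap) / (Ap @ p)          # modified alpha_k
        p_next = Ap - alpha * p
        if k > 0:
            p_next = p_next - ((Ap @ p) / (Ap_prev @ p_prev)) * p_prev   # beta_k p_{k-1}
        p_prev, Ap_prev = p, Ap
        p, Ap = p_next, A @ p_next
        x = x + ((b @ p) / (Ap @ p)) * p      # x_{k+1} = x_k + c_{k+1} p_{k+1}
    return x                                  # x = x_{n-1}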
References
[1] C. Lanczos, "An iteration method for the solution of the eigenvalue problem of linear differential and integral operators", Res. Nat. Bur. Stand., 45 : 4 (1950) pp. 255–288
[2] D.K. Faddeev, V.N. Faddeeva, "Computational methods of linear algebra", Freeman (1963) (Translated from Russian)