Minimal iteration method

From Encyclopedia of Mathematics

Latest revision as of 19:56, 15 January 2024


A method for solving a system of linear algebraic equations $A x = b$ in which the solution $x$ is represented as a linear combination of basis vectors that are orthogonal in some metric connected with the matrix of the system.

In the case of a symmetric matrix $A$, the orthogonal system of vectors $p_0, \dots, p_{n-1}$ is constructed using the three-term recurrence formula

$$ \tag{1} p_{k+1} = A p_k - \alpha_k p_k - \beta_k p_{k-1}, \quad k = 1, \dots, n-2, $$

with $p_1 = A p_0 - \alpha_0 p_0$ and $p_0$ an arbitrary vector, where

$$ \alpha_k = \frac{(A p_k, p_k)}{(p_k, p_k)}, \quad k = 0, \dots, n-2, $$

$$ \beta_k = \frac{(p_k, p_k)}{(p_{k-1}, p_{k-1})}, \quad k = 1, \dots, n-2. $$
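As a check, recurrence (1) with the coefficients above can be run numerically: for a symmetric matrix the vectors it produces are mutually orthogonal. A minimal Python/NumPy sketch (the function name `orthogonal_basis` is ours, not standard terminology):

```python
import numpy as np

def orthogonal_basis(A, p0):
    """Vectors p_0, ..., p_{n-1} built by the three-term recurrence (1).

    For symmetric A they are mutually orthogonal in the Euclidean
    inner product.  p0 is an arbitrary (generic) starting vector.
    """
    P = [np.asarray(p0, dtype=float)]
    n = len(P[0])
    for k in range(n - 1):
        pk = P[k]
        alpha = ((A @ pk) @ pk) / (pk @ pk)   # alpha_k = (A p_k, p_k)/(p_k, p_k)
        nxt = A @ pk - alpha * pk
        if k > 0:                             # beta_k = (p_k, p_k)/(p_{k-1}, p_{k-1})
            beta = (pk @ pk) / (P[k - 1] @ P[k - 1])
            nxt -= beta * P[k - 1]
        P.append(nxt)
    return P
```

Since each new vector is orthogonalized only against the two preceding ones, one step costs a single matrix-vector product plus a few inner products; in floating point the orthogonality of widely separated vectors degrades gradually, as with the Lanczos recurrence.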

The solution of the system $A x = b$ is found by the formula $x = \sum_{k=0}^{n-1} c_k p_k$, and the coefficients $c_k$ are given as the solution of the system

$$ \tag{2} \left. \begin{array}{c} c_{i-1} + \alpha_i c_i + \beta_{i+1} c_{i+1} = \frac{(b, p_i)}{(p_i, p_i)}, \quad i = 1, \dots, n-2, \\ \alpha_0 c_0 + \beta_1 c_1 = \frac{(b, p_0)}{(p_0, p_0)}, \\ c_{n-2} + \alpha_{n-1} c_{n-1} = \frac{(b, p_{n-1})}{(p_{n-1}, p_{n-1})}. \end{array} \right\} $$
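Putting recurrence (1) and system (2) together gives a complete solver for symmetric $A$. A minimal Python/NumPy sketch (the function name is hypothetical, and for clarity the tridiagonal system (2) is passed to a dense solver rather than a specialized tridiagonal routine):

```python
import numpy as np

def minimal_iteration_solve(A, b, p0):
    """Solve A x = b for symmetric A: build the orthogonal basis by
    recurrence (1), then solve the tridiagonal system (2) for the c_k."""
    n = len(b)
    P = [np.asarray(p0, dtype=float)]
    alphas, betas = [], [0.0]                 # betas[0] is unused padding
    for k in range(n - 1):
        pk = P[k]
        alphas.append(((A @ pk) @ pk) / (pk @ pk))
        nxt = A @ pk - alphas[k] * pk
        if k > 0:
            betas.append((pk @ pk) / (P[k - 1] @ P[k - 1]))
            nxt -= betas[k] * P[k - 1]
        P.append(nxt)
    # alpha_{n-1} and beta_{n-1} appear in system (2) but not in the recurrence
    alphas.append(((A @ P[-1]) @ P[-1]) / (P[-1] @ P[-1]))
    betas.append((P[-1] @ P[-1]) / (P[-2] @ P[-2]))
    # assemble system (2): row i reads
    # c_{i-1} + alpha_i c_i + beta_{i+1} c_{i+1} = (b, p_i)/(p_i, p_i)
    T = np.zeros((n, n))
    rhs = np.empty(n)
    for i in range(n):
        T[i, i] = alphas[i]
        if i > 0:
            T[i, i - 1] = 1.0
        if i < n - 1:
            T[i, i + 1] = betas[i + 1]
        rhs[i] = (b @ P[i]) / (P[i] @ P[i])
    c = np.linalg.solve(T, rhs)
    return sum(ck * pk for ck, pk in zip(c, P))   # x = sum_k c_k p_k
```

The sketch assumes a generic initial vector $p_0$, so that none of the $p_r$ vanishes before step $n$.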

If the orthogonalization algorithm degenerates, that is, if $p_r = 0$ for some $r < n$, one has to choose a new initial vector $p_0^{(1)}$ orthogonal to $p_0, \dots, p_{r-1}$ and complete the system of basis vectors to a full basis.

In the case of a non-symmetric matrix, a bi-orthogonal algorithm is used.

If $A$ is symmetric and positive definite, then constructing an $A$-orthogonal system $p_0, \dots, p_{n-1}$ by formula (1) with

$$ \alpha_k = \frac{(A p_k, A p_k)}{(A p_k, p_k)}, \quad \beta_k = \frac{(A p_k, p_k)}{(A p_{k-1}, p_{k-1})} $$

enables one to avoid solving the auxiliary system (2) and gives an explicit expression for the coefficients: $c_k = (b, p_k)/(A p_k, p_k)$. To this method of $A$-minimal iteration one can then add the iteration

$$ x_{k+1} = x_k + c_{k+1} p_{k+1}, \quad k = 0, \dots, n-2, \quad x_0 = c_0 p_0, $$

where $x = x_{n-1}$. This modification of the method does not require repeated use of all the vectors $p_0, \dots, p_{k-1}$. A minimal iteration method is also used for the solution of the complete eigenvalue problem and for finding the inverse matrix.
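Under these assumptions (symmetric positive-definite $A$), the $A$-minimal iteration variant can be sketched as follows: the basis comes from recurrence (1) with the modified coefficients, each $c_k$ is computed explicitly, and $x$ is updated incrementally, so only the previous basis vector needs to be kept (the function name is hypothetical):

```python
import numpy as np

def a_minimal_iteration(A, b, p0):
    """Solve A x = b for symmetric positive-definite A.

    Builds an A-orthogonal basis by recurrence (1) with the modified
    alpha_k, beta_k; each c_k is explicit, so x is accumulated on the
    fly and only p_{k-1} is retained."""
    n = len(b)
    p_prev = None
    p = np.asarray(p0, dtype=float)
    x = ((b @ p) / ((A @ p) @ p)) * p         # x_0 = c_0 p_0
    for k in range(n - 1):
        Ap = A @ p
        alpha = (Ap @ Ap) / (Ap @ p)          # (A p_k, A p_k)/(A p_k, p_k)
        nxt = Ap - alpha * p
        if p_prev is not None:                # (A p_k, p_k)/(A p_{k-1}, p_{k-1})
            beta = (Ap @ p) / ((A @ p_prev) @ p_prev)
            nxt -= beta * p_prev
        p_prev, p = p, nxt
        c = (b @ p) / ((A @ p) @ p)           # explicit coefficient c_{k+1}
        x = x + c * p                         # x_{k+1} = x_k + c_{k+1} p_{k+1}
    return x
```

Because the $c_k$ are explicit, no auxiliary system has to be solved after the basis is built, which is exactly the advantage over the variant based on system (2).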

References

[1] C. Lanczos, "An iteration method for the solution of the eigenvalue problem of linear differential and integral operators", Res. Nat. Bur. Stand., 45:4 (1950), pp. 255–288
[2] D.K. Faddeev, V.N. Faddeeva, "Computational methods of linear algebra", Freeman (1963) (Translated from Russian)
How to Cite This Entry:
Minimal iteration method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Minimal_iteration_method&oldid=16741
This article was adapted from an original article by E.S. Nikolaev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article