Minimal iteration method

From Encyclopedia of Mathematics

Latest revision as of 19:56, 15 January 2024


A method for solving systems of linear algebraic equations $ A x = b $, in which the solution $ x $ is represented as a linear combination of basis vectors that are orthogonal in some metric connected with the matrix of the system.

In the case of a symmetric matrix $ A $, the orthogonal system of vectors $ p _ {0} \dots p _ {n-1} $ is constructed using the three-term recurrence formula

$$ \tag{1} p _ {k+1} = A p _ {k} - \alpha _ {k} p _ {k} - \beta _ {k} p _ {k-1} ,\ \ k = 1 \dots n - 2 , $$

$ p _ {1} = A p _ {0} - \alpha _ {0} p _ {0} $, $ p _ {0} $ an arbitrary vector, where

$$ \alpha _ {k} = \ \frac{( A p _ {k} , p _ {k} ) }{( p _ {k} , p _ {k} ) } ,\ \ k = 0 \dots n - 2 , $$

$$ \beta _ {k} = \frac{( p _ {k} , p _ {k} ) }{( p _ {k-1} , p _ {k-1} ) } ,\ k = 1 \dots n - 2 . $$
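The recurrence (1) with these coefficients can be sketched as follows (a minimal NumPy illustration with a hypothetical function name; no safeguard against the degenerate case $ p _ {r} = 0 $ is included):

```python
import numpy as np

def orthogonal_basis(A, p0):
    """Build p_0, ..., p_{n-1} by the three-term recurrence (1)
    for a symmetric matrix A (no check for degeneracy p_r = 0)."""
    n = A.shape[0]
    p = [np.asarray(p0, dtype=float)]
    Ap = A @ p[0]
    alpha0 = (Ap @ p[0]) / (p[0] @ p[0])
    p.append(Ap - alpha0 * p[0])                      # p_1 = A p_0 - alpha_0 p_0
    for k in range(1, n - 1):
        Ap = A @ p[k]
        alpha = (Ap @ p[k]) / (p[k] @ p[k])           # alpha_k = (A p_k, p_k)/(p_k, p_k)
        beta = (p[k] @ p[k]) / (p[k - 1] @ p[k - 1])  # beta_k = (p_k, p_k)/(p_{k-1}, p_{k-1})
        p.append(Ap - alpha * p[k] - beta * p[k - 1])
    return p
```

For a symmetric matrix the vectors produced this way are pairwise orthogonal, which can be verified numerically on any small example.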

The solution of the system $ A x = b $ is found by the formula $ x = \sum_{k=0}^{n-1} c _ {k} p _ {k} $, and the coefficients $ c _ {k} $ are given as the solution of the system

$$ \tag{2} \left . \begin{array}{c} c _ {i-1} + \alpha _ {i} c _ {i} + \beta _ {i+1} c _ {i+1} = \ \frac{( b , p _ {i} ) }{( p _ {i} , p _ {i} ) } ,\ \ i = 1 \dots n - 2 , \\ \alpha _ {0} c _ {0} + \beta _ {1} c _ {1} = \ \frac{( b , p _ {0} ) }{( p _ {0} , p _ {0} ) } , \\ c _ {n-2} + \alpha _ {n-1} c _ {n-1} = \ \frac{( b , p _ {n-1} ) }{( p _ {n-1} , p _ {n-1} ) } . \end{array} \right \} $$
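Since (2) is tridiagonal, the whole scheme fits in a short routine. The NumPy sketch below (illustrative name, no degeneracy check) assembles (2) — note that its last rows need $ \alpha _ {n-1} $ and $ \beta _ {n-1} $, obtained by extending the defining formulas to $ k = n - 1 $ — and returns $ x = \sum_{k=0}^{n-1} c _ {k} p _ {k} $:

```python
import numpy as np

def minimal_iteration_solve(A, b, p0):
    """Solve A x = b for symmetric A: build p_0, ..., p_{n-1} by the
    recurrence (1), assemble the tridiagonal system (2) for the c_k,
    and return x = sum_k c_k p_k (assumes n >= 2, no degeneracy check)."""
    n = A.shape[0]
    p = [np.asarray(p0, dtype=float)]
    Ap = A @ p[0]
    p.append(Ap - (Ap @ p[0]) / (p[0] @ p[0]) * p[0])   # p_1
    for k in range(1, n - 1):
        Ap = A @ p[k]
        alpha = (Ap @ p[k]) / (p[k] @ p[k])
        beta = (p[k] @ p[k]) / (p[k - 1] @ p[k - 1])
        p.append(Ap - alpha * p[k] - beta * p[k - 1])
    # alpha_i and beta_i for all rows of (2); the defining formulas
    # are extended to i = n-1, as the last rows require
    alpha = [((A @ p[i]) @ p[i]) / (p[i] @ p[i]) for i in range(n)]
    beta = [0.0] + [(p[i] @ p[i]) / (p[i - 1] @ p[i - 1]) for i in range(1, n)]
    T = np.zeros((n, n))
    rhs = np.array([(b @ p[i]) / (p[i] @ p[i]) for i in range(n)])
    T[0, 0], T[0, 1] = alpha[0], beta[1]
    for i in range(1, n - 1):
        T[i, i - 1], T[i, i], T[i, i + 1] = 1.0, alpha[i], beta[i + 1]
    T[n - 1, n - 2], T[n - 1, n - 1] = 1.0, alpha[n - 1]
    c = np.linalg.solve(T, rhs)
    return sum(c[k] * p[k] for k in range(n))
```

A dedicated tridiagonal solver could replace `np.linalg.solve`; the dense solve keeps the sketch short.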

If the orthogonalization algorithm is degenerate, that is, if $ p _ {r} = 0 $ for $ r < n $, one has to choose a new initial vector $ p _ {0} ^ {(1)} $, orthogonal to $ p _ {0} \dots p _ {r-1} $, and extend the system of basis vectors to a complete one.
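One standard way to obtain such a $ p _ {0} ^ {(1)} $ (the article does not prescribe a particular construction) is to orthogonalize an arbitrary trial vector against $ p _ {0} \dots p _ {r-1} $ by Gram–Schmidt; a sketch with hypothetical names:

```python
import numpy as np

def restart_vector(ps, trial):
    """Given the orthogonal vectors p_0, ..., p_{r-1} already built and a
    trial vector, return a candidate p_0^{(1)} orthogonal to all of them
    (classical Gram-Schmidt projection; one standard choice)."""
    q = np.asarray(trial, dtype=float).copy()
    for p in ps:
        q -= (q @ p) / (p @ p) * p   # remove the component along p
    return q
```

If the returned vector is zero, the trial vector lay in the span of the existing basis and another trial must be taken.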

In the case of a non-symmetric matrix a bi-orthogonal algorithm is used.

If $ A $ is symmetric and positive definite, then constructing an $ A $-orthogonal system $ p _ {0} \dots p _ {n-1} $ by formula (1) with

$$ \alpha _ {k} = \ \frac{( A {p _ {k} } , A {p _ {k} } ) }{( A {p _ {k} } , p _ {k} ) } ,\ \ \beta _ {k} = \ \frac{( A {p _ {k} } , p _ {k} ) }{( A {p _ {k-1} } , p _ {k-1} ) } $$

enables one to avoid solving the auxiliary system (2) and gives an explicit expression for the coefficients $ c _ {k} $: $ c _ {k} = ( b , p _ {k} ) / ( A p _ {k} , p _ {k} ) $. Here, to the method of $ A $-minimal iteration one can adjoin the iteration

$$ x _ {k+1} = x _ {k} + c _ {k+1} p _ {k+1} ,\ \ k = 0 \dots n - 2 ,\ \ x _ {0} = c _ {0} p _ {0} , $$
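A sketch of this variant for symmetric positive-definite $ A $ (NumPy; illustrative function name, no degeneracy safeguard), combining the modified coefficients with the running sum $ x _ {k+1} = x _ {k} + c _ {k+1} p _ {k+1} $:

```python
import numpy as np

def a_minimal_iteration(A, b, p0):
    """For symmetric positive-definite A: build an A-orthogonal system by
    (1) with the modified alpha_k, beta_k, and accumulate
    x_{k+1} = x_k + c_{k+1} p_{k+1} with c_k = (b, p_k) / (A p_k, p_k)."""
    n = A.shape[0]
    p = np.asarray(p0, dtype=float)
    Ap = A @ p
    x = (b @ p) / (Ap @ p) * p                    # x_0 = c_0 p_0
    p_prev = Ap_prev = None
    for _ in range(n - 1):
        alpha = (Ap @ Ap) / (Ap @ p)              # alpha_k = (A p_k, A p_k)/(A p_k, p_k)
        nxt = Ap - alpha * p
        if p_prev is not None:
            beta = (Ap @ p) / (Ap_prev @ p_prev)  # beta_k = (A p_k, p_k)/(A p_{k-1}, p_{k-1})
            nxt -= beta * p_prev
        p_prev, Ap_prev = p, Ap
        p, Ap = nxt, A @ nxt
        x = x + (b @ p) / (Ap @ p) * p            # add c_{k+1} p_{k+1}
    return x                                      # x = x_{n-1}
```

Only $ p _ {k} $ and $ p _ {k-1} $ are kept at any moment, which is exactly the saving the modification provides.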

where $ x = x _ {n-1} $. This modification of the method does not require repeated use of all the vectors $ p _ {0} \dots p _ {k-1} $. The minimal iteration method is also used for the solution of the complete eigenvalue problem and for finding the inverse matrix.

References

[1] C. Lanczos, "An iteration method for the solution of the eigenvalue problem of linear differential and integral operators" Res. Nat. Bur. Stand. , 45 : 4 (1950) pp. 255–288
[2] D.K. Faddeev, V.N. Faddeeva, "Computational methods of linear algebra" , Freeman (1963) (Translated from Russian)
How to Cite This Entry:
Minimal iteration method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Minimal_iteration_method&oldid=55113
This article was adapted from an original article by E.S. Nikolaev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article