# Iteration methods for a matrix eigen value problem

Methods for finding the eigen values and eigen vectors (or a principal basis) of a matrix that omit the preliminary calculation of the characteristic polynomial. These methods differ substantially for problems of average size, in which the matrix can be stored entirely in computer memory, and for problems of high order, in which the information is usually stored in compact form.

The first iteration method was proposed by C.G.J. Jacobi for the computation of the eigen values and eigen vectors of real symmetric matrices (cf. Rotation method). This method generalizes to complex Hermitian matrices, and further to the larger class of normal matrices.
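As an illustration, here is a minimal NumPy sketch of the classical Jacobi rotation method for a real symmetric matrix; the function name `jacobi_eigen`, the largest-entry pivoting strategy, and the tolerances are illustrative choices, not part of the original formulation.

```python
import numpy as np

def jacobi_eigen(A, tol=1e-12, max_sweeps=50):
    """Classical Jacobi rotation method for a real symmetric matrix.

    Repeatedly annihilates the off-diagonal entry of largest absolute
    value with a plane rotation; returns approximate eigen values and
    an orthogonal matrix of approximate eigen vectors.
    """
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)                      # accumulates the rotations
    for _ in range(max_sweeps * n * n):
        # locate the off-diagonal entry of largest absolute value
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = divmod(off.argmax(), n)
        if off[p, q] < tol:
            break
        # rotation angle annihilating A[p, q]
        theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n)
        J[p, p] = J[q, q] = c
        J[p, q], J[q, p] = s, -s
        A = J.T @ A @ J                # similarity transformation
        V = V @ J
    return np.diag(A), V
```

The off-diagonal mass decreases at every rotation, so the iterates converge to a diagonal matrix; the accumulated product `V` then holds the eigen vectors.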

There are a number of generalizations of Jacobi's method to matrices of arbitrary form. A typical algorithm of this class consists of a sequence of elementary steps, performed according to the scheme

$$\widetilde{A} _ {k+1} = S _ {k} ^ {-1} A _ {k} S _ {k} ,$$

$$A _ {k+1} = T _ {k} ^ {*} \widetilde{A} _ {k+1} T _ {k} .$$

The role of the similarity transformation with the (elementary) matrix $S _ {k}$ is to reduce the Euclidean norm of the current matrix $A _ {k}$, i.e. $A _ {k+1}$ is nearer to being normal than $A _ {k}$. For $T _ {k}$ one usually takes a rotation matrix or its unitary analogue. The aim of the similarity transformation with this matrix is, as in Jacobi's classical method, to annihilate off-diagonal entries in a Hermitian matrix related to $\widetilde{A} _ {k+1}$, e.g. in the matrix $\widetilde{A} _ {k+1} + \widetilde{A} {} _ {k+1} ^ {*}$. The matrices $A _ {k}$ converge, as $k$ increases, to a matrix of diagonal or quasi-diagonal form, and the accumulated products of the $S _ {k}$ and $T _ {k}$ give a matrix whose columns are approximate eigen vectors, or base vectors of invariant subspaces, of this matrix.

Along with the methods described above, algorithms of another class, so-called power methods, have been developed. The most effective method in this direction, and the one most often used for solving problems of average size, is the QR-algorithm. The iterations of the QR-algorithm are performed according to the following scheme:

$$\tag{1 } A _ {k} = Q _ {k} R _ {k} ,\ \ A _ {k+1} = R _ {k} Q _ {k} .$$

In it, $Q _ {k}$ is an orthogonal or unitary matrix, and $R _ {k}$ is a right-triangular matrix. In the transition from $A _ {k}$ to $A _ {k+1}$ one first finds the orthogonal-triangular decomposition of $A _ {k}$, after which $Q _ {k}$ and $R _ {k}$ are multiplied in reverse order. If $\widetilde{Q} _ {k} = Q _ {1} \dots Q _ {k}$ and $\widetilde{R} _ {k} = R _ {k} \dots R _ {1}$, then (1) implies

$$\tag{2 } A _ {k+1} = \widetilde{Q} {} _ {k} ^ {*} A _ {1} \widetilde{Q} _ {k} ,\ \ A _ {1} ^ {k} = \widetilde{Q} _ {k} \widetilde{R} _ {k} .$$

Thus, the QR-algorithm generates a sequence of matrices $A _ {k}$ that are orthogonally similar to the initial matrix $A _ {1}$; moreover, the transforming matrix $\widetilde{Q} _ {k}$ is the orthogonal component in the decomposition (2) of $A _ {1} ^ {k}$.
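Scheme (1) can be sketched directly in NumPy; the function name `qr_algorithm` and the fixed iteration count are illustrative, and a practical implementation would add shifts and deflation as discussed below.

```python
import numpy as np

def qr_algorithm(A, iterations=500):
    """Unshifted QR iteration (scheme (1)): A_{k+1} = R_k Q_k.

    Also accumulates Qtilde = Q_1 ... Q_k, so that
    A_{k+1} = Qtilde^* A_1 Qtilde, as in (2).
    """
    Ak = A.astype(float).copy()
    Qtilde = np.eye(A.shape[0])
    for _ in range(iterations):
        Q, R = np.linalg.qr(Ak)   # orthogonal-triangular decomposition
        Ak = R @ Q                # multiply in reverse order
        Qtilde = Qtilde @ Q
    return Ak, Qtilde
```

For a matrix with eigen values of distinct absolute value, the iterates approach triangular form and the diagonal approaches the eigen values.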

When iterating by scheme (1), the matrices $A _ {k}$ converge to a right-triangular or quasi-triangular matrix, and the rate of convergence to zero of the subdiagonal entries is determined by the ratios of the absolute values of the eigen values and is, generally speaking, quite slow. To improve the convergence of the QR-algorithm one uses so-called shifts, which leads to the following variant of (1):

$$\tag{3 } A _ {k} - \kappa _ {k} I = Q _ {k} R _ {k} ,\ \ A _ {k+1} = R _ {k} Q _ {k} + \kappa _ {k} I .$$

Usually, by the application of shifts (e.g. $\kappa _ {k} = a _ {nn} ^ {(k)}$) one obtains a faster convergence to zero of the off-diagonal entries of the last row (asymptotically quadratic in the general case and cubic for Hermitian matrices), with correspondingly fast stabilization of the diagonal entry $( n , n )$. After the value of this entry has stabilized, further iteration is performed with the principal submatrix of order $n - 1$, etc. The eigen vectors of the resulting triangular matrix give, after multiplication by the accumulated product of the orthogonal matrices $Q _ {k}$, the eigen vectors of the initial matrix $A _ {1}$.
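The shifted scheme (3) with deflation can be sketched as follows. This is a simplified NumPy illustration: the inner iteration cap, the dense QR factorization, and the restriction to matrices with real eigen values are simplifications of the sketch, not of the method itself.

```python
import numpy as np

def qr_shifted(A, tol=1e-12):
    """QR algorithm with shifts (scheme (3)) and deflation.

    Uses the corner entry a_nn^{(k)} as shift; once the last row's
    off-diagonal entries are negligible, the corner entry is accepted
    as an eigen value and iteration continues on the leading principal
    submatrix of order one less.
    """
    Ak = A.astype(float).copy()
    m = Ak.shape[0]
    eigs = []
    while m > 1:
        for _ in range(100):                  # safeguard cap for this sketch
            if np.abs(Ak[m-1, :m-1]).max() < tol:
                break
            kappa = Ak[m-1, m-1]              # shift
            Q, R = np.linalg.qr(Ak[:m, :m] - kappa * np.eye(m))
            Ak[:m, :m] = R @ Q + kappa * np.eye(m)
        eigs.append(Ak[m-1, m-1])             # deflate
        m -= 1
    eigs.append(Ak[0, 0])
    return np.array(eigs)
```

On a symmetric matrix the stabilization of the corner entry is typically observed within a handful of iterations per eigen value, in line with the cubic convergence noted above.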

Iteration by (1) or (3) is applied to matrices that have previously been reduced to so-called Hessenberg form. A matrix $A$ is in right Hessenberg form if $a _ {ij} = 0$ for $i > j + 1$. The QR-algorithm preserves Hessenberg form, which substantially reduces the cost of each iteration. There are other important ways of reducing the amount of computation, e.g. the implicit use of shifts, which allows one to find complex-conjugate eigen values of real matrices without resorting to complex arithmetic.
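The preliminary reduction to right Hessenberg form is usually done with Householder similarity transformations; a NumPy sketch might look as follows (`to_hessenberg` is an illustrative name):

```python
import numpy as np

def to_hessenberg(A):
    """Reduce A to right (upper) Hessenberg form, a_ij = 0 for i > j + 1,
    by a sequence of Householder similarity transformations."""
    H = A.astype(float).copy()
    n = H.shape[0]
    for j in range(n - 2):
        x = H[j+1:, j].copy()
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        if np.linalg.norm(v) == 0:
            continue                       # column already in desired form
        v /= np.linalg.norm(v)
        # apply P = I - 2 v v^T as a similarity: from the left and the right
        H[j+1:, :] -= 2.0 * np.outer(v, v @ H[j+1:, :])
        H[:, j+1:] -= 2.0 * np.outer(H[:, j+1:] @ v, v)
    return H
```

Since each step is a similarity transformation, the spectrum is preserved while all entries below the first subdiagonal are annihilated.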

In problems of high order (from hundreds to thousands), the matrices are usually sparse, i.e. have relatively few non-zero entries. Moreover, it is usually required to compute not all, but only a few, eigen values and the corresponding eigen vectors. In a typical case the required eigen values are the largest or the smallest in absolute value.

The methods described above, based on similarity transformations, destroy the sparseness of the matrix and are thus not recommended. Fundamental for problems of high order are methods in which the elementary operation is multiplication of the matrix by a vector. In what follows it is supposed that the eigen values of the matrix $A$ are enumerated in order of decreasing absolute value:

$$| \lambda _ {1} | \geq \dots \geq | \lambda _ {n} | .$$

The power method for determining the eigen value $\lambda _ {1}$ of maximal absolute value has the widest domain of applicability. Starting from an initial approximation $x _ {0}$, one constructs a sequence of normalized vectors

$$\tag{4 } x _ {k+1} = \rho _ {k} A x _ {k} ,\ \ \rho _ {k} = \frac{1}{\| A x _ {k} \| } .$$

This sequence converges to an eigen vector corresponding to $\lambda _ {1}$ if: 1) all elementary divisors of $A$ related to $\lambda _ {1}$ are linear; 2) there are no other eigen values of the same absolute value; and 3) in the decomposition of $x _ {0}$ with respect to a principal basis of $A$ the component in the eigen space corresponding to $\lambda _ {1}$ is non-trivial. However, the convergence of the power method is, as a rule, slow; its rate is determined by the ratio $| \lambda _ {2} | / | \lambda _ {1} |$.
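Formula (4) transcribes almost literally into NumPy. The Rayleigh-quotient estimate of $\lambda _ {1}$ returned at the end is a standard addition of this sketch, not part of formula (4) itself.

```python
import numpy as np

def power_method(A, x0, iterations=200):
    """Power method (4): x_{k+1} = A x_k / ||A x_k||.

    Returns the final normalized iterate together with the
    Rayleigh-quotient estimate of the dominant eigen value lambda_1.
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(iterations):
        y = A @ x
        x = y / np.linalg.norm(y)
    return x, (x @ A @ x) / (x @ x)
```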

If an approximation $\widetilde \lambda _ {0}$ to the desired eigen value $\lambda _ {0}$ is known, then one obtains a faster convergence by the method of inverse iteration. Instead of (4), one constructs a sequence determined by

$$\tag{5 } ( A - \widetilde \lambda _ {0} I ) x _ {k+1} = \rho _ {k} x _ {k} ,\ \ \| x _ {k} \| = 1 ,\ \ \textrm{ for all } k .$$

The method of inverse iteration is essentially the power method applied to $( A - \widetilde \lambda _ {0} I ) ^ {-1}$, which has the strongly-dominating eigen value $1 / ( \lambda _ {0} - \widetilde \lambda _ {0} )$. However, each step of (5) requires the solution of a linear system with matrix $A - \widetilde \lambda _ {0} I$, and even when special methods for sparse systems are used this increases the demands on computer memory in comparison with the power method.
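Scheme (5) can be sketched as follows, assuming NumPy. For simplicity this sketch re-solves the system with a dense solver at every step, whereas a practical implementation would factor $A - \widetilde \lambda _ {0} I$ once and reuse the factorization.

```python
import numpy as np

def inverse_iteration(A, shift, x0, iterations=20):
    """Inverse iteration (5): solve (A - shift*I) x_{k+1} = x_k, normalize.

    This is the power method for (A - shift*I)^{-1}; the closer the
    shift is to the desired eigen value, the faster the convergence.
    """
    n = A.shape[0]
    M = A - shift * np.eye(n)
    x = x0 / np.linalg.norm(x0)
    for _ in range(iterations):
        y = np.linalg.solve(M, x)      # one linear solve per iteration
        x = y / np.linalg.norm(y)
    return x, (x @ A @ x) / (x @ x)
```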

So-called methods of simultaneous iteration are used to compute groups of eigen values. They generalize the power method: instead of iterating a single vector, one in effect iterates an entire subspace under $A$. Stewart's method is a typical representative of this group of methods. Suppose that the eigen values of $A$ satisfy

$$| \lambda _ {1} | \geq \dots \geq \ | \lambda _ {r} | > \ | \lambda _ {r+1} | \geq \dots \geq \ | \lambda _ {n} | .$$

One chooses an initial $( n \times r )$-matrix $Q _ {0}$ with orthonormal columns. One then constructs a sequence of matrices $Q _ {k}$, starting from $Q _ {0}$, by

$$\tag{6 } \widetilde{Q} _ {k+1} = A Q _ {k} ,\ \ \widetilde{Q} _ {k+1} = {\tilde{\tilde{Q}} } _ {k+1} R _ {k+1} ,$$

$$Q _ {k+1} = {\tilde{\tilde{Q}} } _ {k+1} W _ {k+1} .$$

In the second formula, $R _ {k+1}$ is a right-triangular $( r \times r )$-matrix, and ${\tilde{\tilde{Q}} } _ {k+1}$ has orthonormal columns. The aim of this decomposition, which need not be computed at every iteration, is to preserve the linear independence of the columns of $Q _ {k}$, which may in practice be destroyed by repeated multiplication by $A$. The third formula in (6) plays an important role in accelerating the convergence of the method. The orthogonal $( r \times r )$-matrix $W _ {k+1}$ appearing in it has the following meaning: if $B _ {k+1}$ denotes the $( r \times r )$-matrix $Q _ {k} ^ {*} A Q _ {k}$, then $W _ {k+1}$ reduces $B _ {k+1}$ to Schur form, i.e. to a right-triangular matrix. The matrix $W _ {k+1}$ can be constructed using the QR-algorithm; its columns form a Schur basis of $B _ {k+1}$, characterized by the fact that for each $i$, $1 \leq i \leq r$, the linear span of the first $i$ vectors is an invariant subspace of $B _ {k+1}$. The matrices $Q _ {k}$ of Stewart's method converge to a matrix $Q$ whose columns form a Schur basis for the invariant subspace of $A$ corresponding to $\lambda _ {1} \dots \lambda _ {r}$. Here, the convergence of the $i$-th column is determined by the quotient $| \lambda _ {r+1} | / | \lambda _ {i} |$.
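A simplified sketch of simultaneous iteration in the spirit of (6), specialized to a symmetric matrix: in that case the Schur basis of $B$ is simply its orthonormal eigen vector basis, so `numpy.linalg.eigh` stands in for the QR-algorithm, and this sketch performs the re-orthonormalization and the Schur step at every iteration (neither is required that often in practice).

```python
import numpy as np

def simultaneous_iteration(A, r, iterations=200, seed=0):
    """Subspace (simultaneous) iteration for a symmetric matrix A.

    Iterates an n-by-r block under A, re-orthonormalizes it by QR, and
    rotates it by the Schur basis of B = Q^* A Q (here: eigen vectors
    of the symmetric r-by-r matrix B, ordered by decreasing eigen value).
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n, r)))   # Q_0
    for _ in range(iterations):
        Q, _ = np.linalg.qr(A @ Q)        # iterate and re-orthonormalize
        B = Q.T @ A @ Q                   # projected r-by-r matrix
        _, W = np.linalg.eigh(B)          # Schur basis (symmetric case)
        Q = Q @ W[:, ::-1]                # order by decreasing eigen value
    return Q, np.diag(Q.T @ A @ Q)
```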

In the case of a real symmetric matrix, additional possibilities arise. They are related to the treatment of eigen values as stationary points of the Rayleigh functional

$$\tag{7 } \phi ( A , x ) = \ \frac{( A x , x ) }{( x , x ) } ,\ \ x \neq 0 .$$

Methods of unconstrained optimization applied to (7) can be used to determine the extreme points of the spectrum. The resulting theory parallels that of the iteration methods for solving positive-definite linear systems, and leads to the same algorithms: coordinate relaxation, successive overrelaxation, steepest descent, and conjugate gradients.
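As an illustration of optimizing (7), here is a steepest-descent sketch for a symmetric matrix. The residual $r = Ax - \phi ( A , x ) x$ is the gradient direction of (7) up to the factor $2 / ( x , x )$; replacing a scalar line search by a small Rayleigh-Ritz step over $\operatorname{span} \{ x , r \}$ is an implementation choice of this sketch.

```python
import numpy as np

def rayleigh(A, x):
    """The Rayleigh functional (7)."""
    return (x @ (A @ x)) / (x @ x)

def steepest_descent_eig(A, x0, iterations=300):
    """Minimize the Rayleigh functional of a symmetric A.

    Each step replaces x by the minimizer of (7) over span{x, r},
    where r = A x - phi(x) x, found from a 2-by-2 projected problem.
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(iterations):
        r = A @ x - rayleigh(A, x) * x
        if np.linalg.norm(r) < 1e-14:
            break                               # stationary point reached
        S, _ = np.linalg.qr(np.column_stack([x, r]))
        _, W = np.linalg.eigh(S.T @ A @ S)      # 2-by-2 projected problem
        x = S @ W[:, 0]                         # smallest Ritz value
    return x, rayleigh(A, x)
```

Maximizing instead of minimizing (taking `W[:, -1]`) would approach the other end of the spectrum.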

The methods listed find the eigen values "one after another". Lanczos' method is used to determine groups of eigen values of a symmetric matrix simultaneously; it was originally proposed for the tri-diagonalization of a symmetric matrix of order $n$. The following observation lies at the foundation of the method: if the sequence of vectors $p _ {0} , A p _ {0} \dots A ^ {n-1} p _ {0}$ is linearly independent, then $A$ can be reduced to tri-diagonal form $T _ {n}$ in the basis $q _ {1} \dots q _ {n}$ obtained by orthonormalization of this sequence. The vectors $q _ {k}$ are constructed by the three-term recurrence formula:

$$\beta _ {k} q _ {k+1} = \ A q _ {k} - \beta _ {k-1} q _ {k-1} - \alpha _ {k} q _ {k} ,$$

in which the coefficients $\alpha _ {k}$, $\beta _ {k}$, $k = 1 \dots n$, determine $T _ {n}$. After $k$ steps of the orthogonalization process, the vectors $q _ {1} \dots q _ {k}$ as well as the leading principal submatrix $T _ {k}$ of $T _ {n}$ are known. In many cases, already for $k \ll n$, some of the eigen values of $T _ {k}$ are sufficiently good approximations to eigen values of $T _ {n}$. The corresponding eigen vectors of $T _ {k}$ can be used to construct approximate eigen vectors of $A$. If the required accuracy has not been achieved, one may choose a new initial vector $\widetilde{p} _ {0}$ in the linear span of $q _ {1} \dots q _ {k}$ and repeat the process, making $\widetilde{k}$ steps, etc. This constitutes Lanczos' method in its iterative form.
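A NumPy sketch of $k$ steps of the Lanczos recurrence follows. The full reorthogonalization line is a practical safeguard against the loss of orthogonality in floating-point arithmetic, not part of the original three-term formula, and the eigen values of the returned $T _ {k}$ (the Ritz values) approximate the extreme eigen values of $A$.

```python
import numpy as np

def lanczos(A, p0, k):
    """k steps of the Lanczos three-term recurrence
    beta_j q_{j+1} = A q_j - alpha_j q_j - beta_{j-1} q_{j-1}
    for a symmetric matrix A; returns the k-by-k tri-diagonal T_k."""
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    Q[:, 0] = p0 / np.linalg.norm(p0)
    alpha = np.zeros(k)
    beta = np.zeros(k)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j-1] * Q[:, j-1]
        # full reorthogonalization: cheap insurance for small k
        w -= Q[:, :j+1] @ (Q[:, :j+1].T @ w)
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-14:
            break                     # invariant subspace found
        Q[:, j+1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta[:k-1], 1) + np.diag(beta[:k-1], -1)
```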

The search for interior points of the spectrum of a sparse matrix $A$ of high order requires the ability to invert matrices of the form $A - \tau I$ for a sequence of shifts $\{ \tau _ {k} \}$.

How to Cite This Entry:
Iteration methods. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Iteration_methods&oldid=51657
This article was adapted from an original article by Kh.D. Ikramov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article