Partial problem of eigen values


The problem of calculating one or a few eigen values of a square matrix, usually real or complex, as well as the corresponding eigen vectors.

The following variants of the partial problem of eigen values are encountered most often in practice: 1) find the group of eigen values smallest (or largest) in absolute value; 2) find the group of eigen values closest to a given number $ a $; and 3) find the points of the spectrum that lie in a given interval $ ( \alpha , \beta ) $ (for a symmetric or Hermitian matrix).
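To make the three variants concrete, the following minimal sketch computes each of them for a symmetric test matrix. The use of SciPy's Lanczos-based solver eigsh, the random matrix, and all parameter values are illustrative assumptions, not prescriptions of this article.

<pre>
# Sketch: the three common variants of the partial problem of eigen values,
# computed with SciPy's Lanczos-based solver (assumed tooling).
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = (M + M.T) / 2                            # symmetric test matrix

# 1) eigen values largest / smallest in absolute value
vals_lm, _ = eigsh(A, k=3, which='LM')       # largest |lambda|
vals_sm, _ = eigsh(A, k=3, which='SM')       # smallest |lambda|

# 2) eigen values closest to a given number a (shift-invert mode,
#    cf. inverse iteration with a shift below)
a = 1.5
vals_near_a, _ = eigsh(A, k=3, sigma=a, which='LM')

# 3) spectrum points in an interval (alpha, beta): shift-invert about the
#    midpoint, then keep only the eigen values that fall inside
alpha, beta = -1.0, 1.0
vals, _ = eigsh(A, k=10, sigma=(alpha + beta) / 2, which='LM')
in_interval = vals[(vals > alpha) & (vals < beta)]
</pre>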

The majority of methods for solving the partial problem of eigen values for general matrices are based on the idea of power iteration or on a variant of it, inverse iteration (cf. [[Iteration methods|Iteration methods]] for the solution of matrix eigen value problems): if a matrix $ A $ has an eigen value $ \lambda _ {\max} $ that is dominant in absolute value and if $ x _ {\max} $ is the corresponding eigen vector, then for almost any vector $ v $ the sequence $ v , A v , A ^ {2} v , \dots $, suitably normalized, converges to $ x _ {\max} $. If the eigen value smallest in absolute value is required (problem 1), then power iteration is carried out on the matrix $ A ^ {-1} $ (the method of inverse iteration); to find an eigen value closest to $ a $ (problem 2), one uses the matrix $ ( A - aI ) ^ {-1} $ (the method of inverse iteration with a shift).
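A minimal NumPy sketch of both iterations described above; the per-step normalization, the step counts, and the use of the Rayleigh quotient as the eigen value estimate are illustrative choices, not details fixed by the article.

<pre>
# Sketch of power iteration and of inverse iteration with a shift.
import numpy as np

def power_iteration(A, num_steps=500):
    """Approximate the eigen pair whose eigen value dominates in modulus."""
    v = np.random.default_rng(1).standard_normal(A.shape[0])
    for _ in range(num_steps):
        w = A @ v
        v = w / np.linalg.norm(w)        # normalize to keep the iterate bounded
    lam = (v @ (A @ v)) / (v @ v)        # Rayleigh quotient estimate
    return lam, v

def shifted_inverse_iteration(A, a, num_steps=50):
    """Approximate the eigen value of A closest to the shift a (problem 2)."""
    n = A.shape[0]
    v = np.random.default_rng(2).standard_normal(n)
    B = A - a * np.eye(n)                # iterate with (A - aI)^{-1}
    for _ in range(num_steps):
        w = np.linalg.solve(B, v)        # solve rather than form the inverse
        v = w / np.linalg.norm(w)
    lam = (v @ (A @ v)) / (v @ v)
    return lam, v
</pre>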

The most important special case of the partial problem of eigen values is that of computing the eigen values and corresponding eigen vectors of a real symmetric or complex Hermitian matrix $ A $. Here there are many effective numerical methods, based on entirely different ideas (cf. [[#References|[1]]]). Among these are: methods that use the extremal properties of the Rayleigh functional (the greatest and the smallest eigen values of $ A $ realize, respectively, the maximum and the minimum of the Rayleigh quotient $ \phi ( A , z ) = ( A z , z ) / ( z , z ) $, $ z \neq 0 $, and these extrema are attained on the corresponding eigen vectors); methods that apply Sylvester's law of inertia (the method of Sturm sequences and, more generally, spectral-division methods); and, finally, methods based on the approximation properties of Krylov subspaces, that is, of the linear spans of systems of the form $ v , A v , \dots , A ^ {k-1} v $ (the Lanczos method and its variants). The choice of method depends on such considerations as the order of the problem, the degree of fill of the matrix, the availability of a band structure, accessible information about the spectrum, etc.
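As an illustration of the Krylov-subspace approach, here is a bare-bones Lanczos sketch for a symmetric matrix. Full reorthogonalization and breakdown handling are omitted, so this is illustrative only, not a production variant of the method.

<pre>
# Bare-bones Lanczos sketch: orthonormalize the Krylov system
# v, Av, ..., A^{k-1} v by a three-term recurrence and read off Ritz
# values from the resulting tridiagonal matrix T.
import numpy as np

def lanczos_ritz_values(A, k, seed=3):
    n = A.shape[0]
    Q = np.zeros((n, k))
    alpha = np.zeros(k)                  # diagonal of T
    beta = np.zeros(max(k - 1, 0))       # off-diagonal of T
    q = np.random.default_rng(seed).standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    # The extreme Ritz values approximate the extreme eigen values of A.
    return np.linalg.eigvalsh(T)
</pre>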

Methods for solving the partial problem of eigen values, both in the general and in the Hermitian case, can be divided into group methods and sequential methods. Group methods are characterized by the fact that the desired eigen values (and the corresponding eigen vectors) are calculated to some extent in parallel. These include numerous methods of simultaneous iteration, the Lanczos method and spectral-division methods.

In sequential methods the eigen values are determined one by one. Here, from the second eigen value onwards, it becomes necessary to ensure that the iterations do not converge to solutions that have already been found. Related to this are the various methods of exhaustion (or deflation) [[#References|[2]]]. In some cases the exhaustion leads to the construction of a matrix $ \widetilde{A} $ in which the previously calculated eigen values of $ A $ correspond to zero; apart from those eigen values, the spectra of the two matrices coincide, as do their eigen vectors. In other cases the result of the exhaustion is a splitting of the matrix, after which the subsequent eigen values may be obtained from matrices of smaller order. In yet other cases the iteration method is carried out on the unmodified matrix $ A $, while orthogonalization is carried out against the previously calculated eigen vectors. Exhaustion techniques can also be used with group methods.
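One concrete instance of exhaustion of the first kind, for a symmetric matrix, is Hotelling deflation: subtracting $ \lambda x x ^ {T} $ maps the computed eigen value $ \lambda $ to zero while leaving the rest of the spectrum and the eigen vectors unchanged. The sketch below, with an assumed random test matrix, illustrates this; the article itself does not single out this particular scheme.

<pre>
# Sketch of Hotelling deflation for a symmetric matrix: the computed
# eigen value is mapped to zero, the rest of the spectrum is untouched.
import numpy as np

def deflate(A, lam, x):
    x = x / np.linalg.norm(x)
    return A - lam * np.outer(x, x)

rng = np.random.default_rng(4)
M = rng.standard_normal((50, 50))
A = (M + M.T) / 2                        # symmetric test matrix

lam, X = np.linalg.eigh(A)
i = np.argmax(np.abs(lam))               # dominant eigen pair, for illustration
A_tilde = deflate(A, lam[i], X[:, i])

# The spectra coincide except that lam[i] is now replaced by zero.
print(np.sort(np.abs(np.linalg.eigvalsh(A_tilde)))[:3])
</pre>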

====References====

[1] B.N. Parlett, "The symmetric eigenvalue problem", Prentice-Hall (1980)
[2] J.H. Wilkinson, "The algebraic eigenvalue problem", Oxford Univ. Press (1969)

====Comments====

Cf. also [[Complete problem of eigen values|Complete problem of eigen values]].

This article was adapted from an original article by Kh.D. Ikramov (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.