Anti-eigenvalue
The theory of anti-eigenvalues is a spectral theory based upon the turning angles of a matrix or operator $ A $.
(See Eigen value for the spectral theory of stretchings, rather than turnings, of a matrix or operator.)
For a strongly accretive operator $A$, i.e., $\mathop{\rm Re} \langle Ax, x \rangle \geq m \| x \|^2$ for some $m > 0$, the first anti-eigenvalue $\mu$ is defined by

$$ \tag{a1} \mu \equiv \cos A = \inf_{x \in D(A)} \frac{\mathop{\rm Re} \langle Ax, x \rangle}{\| Ax \| \, \| x \|}. $$
From (a1) one immediately obtains the notion of the angle $\phi(A) = \arccos \mu$: the largest angle through which $A$ may turn a vector. Any vector $x$ attaining the infimum, i.e. turned by that largest angle, is called a first anti-eigenvector. It turns out that, in general, the first anti-eigenvectors come in pairs. Two important early results were the minmax theorem and the Euler equation.
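As a concrete illustration (a minimal numerical sketch, not from the original article): for a symmetric positive-definite matrix, the operator trigonometry literature (cf. [a1]) gives the closed form $\mu = 2\sqrt{\lambda_1 \lambda_n}/(\lambda_1 + \lambda_n)$, where $\lambda_1$ and $\lambda_n$ are the smallest and largest eigenvalues. The Python sketch below (all names illustrative) estimates the infimum in (a1) by crude random sampling and compares it with this closed form.

```python
import numpy as np

# Minimal sketch (illustrative, not from the article): estimate the first
# anti-eigenvalue cos A of a symmetric positive-definite matrix by crudely
# sampling the quotient in (a1), and compare with the closed form
# 2*sqrt(l1*ln)/(l1 + ln) known for such matrices.
rng = np.random.default_rng(0)
A = np.diag([1.0, 4.0, 9.0])             # SPD; eigenvalues 1, 4, 9

def quotient(x):
    """The functional in (a1): Re<Ax, x> / (||Ax|| ||x||)."""
    Ax = A @ x
    return (x @ Ax) / (np.linalg.norm(Ax) * np.linalg.norm(x))

samples = rng.normal(size=(100_000, 3))  # random directions in R^3
mu_est = min(quotient(x) for x in samples)

l1, ln = 1.0, 9.0
mu_closed = 2.0 * np.sqrt(l1 * ln) / (l1 + ln)   # = 0.6
phi_deg = np.degrees(np.arccos(mu_closed))       # max turning angle, ~53.13 deg
print(mu_est, mu_closed, phi_deg)
```

In this symmetric positive-definite setting the minimizers are, up to sign, the two balanced combinations of the eigenvectors belonging to $\lambda_1$ and $\lambda_n$, which is one way to see that the first anti-eigenvectors come in pairs.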
Minmax theorem.
For any strongly accretive bounded operator $ A $ on a Hilbert space $ X $,
$$ \tag{a2} \sup_{\| x \| = 1} \; \inf_{- \infty < \epsilon < \infty} \| (\epsilon A - I) x \|^2 = \inf_{- \infty < \epsilon < \infty} \; \sup_{\| x \| = 1} \| (\epsilon A - I) x \|^2. $$
Using the minmax theorem, the right-hand side of (a2) is seen to define
$$ \tag{a3} \nu = \sin A = \inf_{\epsilon > 0} \| \epsilon A - I \| $$
in such a way that $\cos^2 A + \sin^2 A = 1$. This identity gives rise to an operator trigonometry (see [a1]).
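Continuing the same hypothetical SPD example (a sketch, not from the article): for a symmetric positive-definite matrix the literature gives $\sin A = (\lambda_n - \lambda_1)/(\lambda_n + \lambda_1)$, with the infimum in (a3) attained at $\epsilon = 2/(\lambda_1 + \lambda_n)$, and the identity can be checked numerically.

```python
import numpy as np

# Sketch (illustrative): approximate nu = sin A = inf_{eps>0} ||eps*A - I||
# over a grid, and verify cos^2 A + sin^2 A = 1 for the SPD example above.
A = np.diag([1.0, 4.0, 9.0])
l1, ln = 1.0, 9.0

eps_grid = np.linspace(1e-3, 1.0, 20_000)
nu_est = min(np.linalg.norm(e * A - np.eye(3), 2) for e in eps_grid)

nu_closed = (ln - l1) / (ln + l1)                 # = 0.8, at eps = 2/(l1+ln)
mu_closed = 2.0 * np.sqrt(l1 * ln) / (l1 + ln)    # = 0.6
print(nu_est, nu_closed, mu_closed**2 + nu_closed**2)   # ~0.8, 0.8, 1.0
```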
Euler equation.
For any strongly accretive bounded operator $ A $ on a Hilbert space $ X $, the Euler equation for the anti-eigenvalue functional $ \mu $ in (a1) is
$$ \tag{a4} 2 \| Ax \|^2 \| x \|^2 (\mathop{\rm Re} A) x - \| x \|^2 \mathop{\rm Re} \langle Ax, x \rangle \, A^* A x - \| Ax \|^2 \mathop{\rm Re} \langle Ax, x \rangle \, x = 0. $$
When $ A $ is a normal operator, (a4) is satisfied not only by the first anti-eigenvectors of $ A $, but by all eigenvectors of $ A $. Therefore the Euler equation may be viewed as a significant extension of the Rayleigh–Ritz theory for the variational characterization of eigenvalues of a self-adjoint or normal operator $ A $. The eigenvectors maximize the variational quotient (a1). The anti-eigenvectors minimize it. See [a2], [a3].
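For example (a direct check, not spelled out in the original): if $x$ is a unit eigenvector of a symmetric positive-definite $A$, say $Ax = \lambda x$ with $\lambda > 0$, then $\| Ax \|^2 = \lambda^2$, $\mathop{\rm Re} \langle Ax, x \rangle = \lambda$ and $A^* A x = \lambda^2 x$, so the left-hand side of (a4) reduces to

$$ 2 \lambda^2 \cdot \lambda x - \lambda \cdot \lambda^2 x - \lambda^2 \cdot \lambda x = 0, $$

confirming that eigenvectors satisfy the Euler equation.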
The theory of anti-eigenvalues has been applied recently (from 1990 onward) to gradient and iterative methods for the solution of linear systems $ Ax = b $; see [a5], [a6]. For example, the Kantorovich convergence rate for steepest descent,
$$ E_A (x_{k+1}) \leq \left( 1 - \frac{4 \lambda_1 \lambda_n}{(\lambda_1 + \lambda_n)^2} \right) E_A (x_k), $$
where $E_A$ denotes the $A$-inner-product error $\langle x - x^*, A (x - x^*) \rangle$ and $\lambda_1 \leq \dots \leq \lambda_n$ are the eigenvalues of the symmetric positive-definite matrix $A$, becomes

$$ E_A (x_{k+1}) \leq (\sin^2 A) \, E_A (x_k). $$
Thus, the Kantorovich error rate is trigonometric. Similar trigonometric convergence bounds hold for conjugate gradient and related, more sophisticated algorithms [a4]. Even the basic Richardson method $x_{k+1} = x_k + \alpha (b - Ax_k)$ (cf. also Richardson extrapolation) may be seen to have optimal convergence rate $\rho_{\textrm{opt}} = \sin A$. For further information, see [a5], [a6].
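A small sketch (illustrative setup, not from the article) makes both statements concrete for the SPD example used above: the per-step energy reduction of steepest descent stays below $\sin^2 A$, and the optimally damped Richardson iteration has contraction factor $\sin A$.

```python
import numpy as np

# Sketch (illustrative): verify the trigonometric convergence statements on
# a small SPD system Ax = b.
A = np.diag([1.0, 4.0, 9.0])
b = np.array([1.0, 1.0, 1.0])
x_star = np.linalg.solve(A, b)
l1, ln = 1.0, 9.0
sinA = (ln - l1) / (ln + l1)             # = 0.8

def energy(x):
    """E_A(x) = <x - x*, A (x - x*)>."""
    e = x - x_star
    return e @ (A @ e)

# Steepest descent with exact line search: E_A(x_{k+1}) <= sin^2(A) E_A(x_k).
x = np.zeros(3)
for _ in range(10):
    r = b - A @ x                        # residual, the descent direction
    alpha = (r @ r) / (r @ (A @ r))      # exact line-search step length
    x_next = x + alpha * r
    assert energy(x_next) <= sinA**2 * energy(x) + 1e-12
    x = x_next

# Richardson iteration x_{k+1} = x_k + alpha (b - A x_k): the optimal step
# alpha = 2/(l1 + ln) gives error contraction rho(I - alpha*A) = sin A.
alpha_opt = 2.0 / (l1 + ln)
rho = max(abs(1.0 - alpha_opt * lam) for lam in (l1, 4.0, ln))
print(rho, sinA)                         # both 0.8
```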
References
[a1] K. Gustafson, "Operator trigonometry", Linear Multilinear Alg. 37 (1994), pp. 139–159.
[a2] K. Gustafson, "Antieigenvalues", Linear Alg. & Its Appl. 208/209 (1994), pp. 437–454.
[a3] K. Gustafson, "Matrix trigonometry", Linear Alg. & Its Appl. 217 (1995), pp. 117–140.
[a4] K. Gustafson, "Operator trigonometry of iterative methods", Numerical Linear Alg. Appl. (1997), to appear.
[a5] K. Gustafson, "Lectures on computational fluid dynamics, mathematical physics, and linear algebra", Kaigai & World Sci. (1996/7).
[a6] K. Gustafson, D. Rao, "Numerical range", Springer (1997).
Anti-eigenvalue. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Anti-eigenvalue&oldid=12538