Latest revision as of 22:14, 5 June 2020
A method for reducing a quadratic form (cf. also Quadratic forms, reduction of) to canonical form by using a triangular transformation of the unknowns; it was suggested by C.G.J. Jacobi (1834) (see [1]).
Let
$$ f = \sum_{i, k = 1}^{n} a_{ki} x_{i} y_{k} $$
be a given bilinear form (not necessarily symmetric) over a field $ P $. Suppose that its matrix $ A = \| a _ {ki} \| $ satisfies the condition
$$ \tag{1} \Delta_{k} \neq 0, \quad k = 1, \dots, n, $$
where $ \Delta _ {k} $ is the minor of order $ k $ in the upper left-hand corner. Then $ f $ can be written in the form
$$ \tag{2} f = \sum_{k = 1}^{n} \frac{u_{k} v_{k}}{\Delta_{k-1} \Delta_{k}}, $$
where $ u_{1} = \partial f/\partial y_{1} $, $ v_{1} = \partial f/\partial x_{1} $, and for $ k = 2, \dots, n $,
$$ \tag{3a} u_{k} = \left\| \begin{array}{cccc} a_{11} & \dots & a_{1,k-1} & \frac{\partial f}{\partial y_{1}} \\ \dots & \dots & \dots & \dots \\ a_{k1} & \dots & a_{k,k-1} & \frac{\partial f}{\partial y_{k}} \\ \end{array} \right\| , $$
$$ \tag{3b} v_{k} = \left\| \begin{array}{cccc} a_{11} & \dots & a_{1,k-1} & \frac{\partial f}{\partial x_{1}} \\ \dots & \dots & \dots & \dots \\ a_{1k} & \dots & a_{k-1,k} & \frac{\partial f}{\partial x_{k}} \\ \end{array} \right\| . $$
In particular, if $ A $ is a symmetric matrix satisfying (1) and $ f $ is the quadratic form with matrix $ A $, then $ f $ can be reduced to the canonical form
$$ \tag{4} f = \sum_{k = 1}^{n} \frac{u_{k}^{2}}{\Delta_{k-1} \Delta_{k}}, \quad \Delta_{0} = 1, $$
by using the following transformation of the unknowns:
$$ \tag{5} u_{k} = \left\| \begin{array}{cccc} a_{11} & \dots & a_{1,k-1} & \frac{1}{2} \frac{\partial f}{\partial x_{1}} \\ \dots & \dots & \dots & \dots \\ a_{k1} & \dots & a_{k,k-1} & \frac{1}{2} \frac{\partial f}{\partial x_{k}} \\ \end{array} \right\| $$
for $ k = 2, \dots, n $, and
$$ u_{1} = \frac{1}{2} \frac{\partial f}{\partial x_{1}}. $$
This transformation has a triangular matrix, and can be written as
$$ \tag{6} u_{k} = \sum_{i = k}^{n} C_{ki} x_{i}, $$
where $ C_{ki} $ is the minor of $ A $ formed from the rows $ 1, \dots, k $ and the columns $ 1, \dots, k-1, i $.
The formulas (2)–(7) are called Jacobi's formulas.
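As a concrete check of formulas (4) and (5), the following sketch (using SymPy; the symbol names are illustrative) verifies Jacobi's canonical form symbolically for a general symmetric $2 \times 2$ form:

```python
import sympy as sp

x1, x2, a11, a12, a22 = sp.symbols('x1 x2 a11 a12 a22')

# Quadratic form with symmetric matrix A = [[a11, a12], [a12, a22]]
f = a11*x1**2 + 2*a12*x1*x2 + a22*x2**2

# Leading principal minors; Delta_0 = 1 by convention
D0, D1, D2 = 1, a11, a11*a22 - a12**2

# New unknowns per formula (5): u_1 = (1/2) df/dx1, and
# u_2 = det [[a11, (1/2) df/dx1], [a21, (1/2) df/dx2]]
u1 = sp.Rational(1, 2) * sp.diff(f, x1)
u2 = sp.Matrix([[a11, sp.Rational(1, 2)*sp.diff(f, x1)],
                [a12, sp.Rational(1, 2)*sp.diff(f, x2)]]).det()

# Canonical form (4): f = sum_k u_k^2 / (Delta_{k-1} Delta_k)
canonical = u1**2/(D0*D1) + u2**2/(D1*D2)
print(sp.simplify(canonical - f))  # -> 0
```

The difference simplifies to zero, confirming that (4) reproduces the original form whenever $ \Delta_1 \neq 0 $ and $ \Delta_2 \neq 0 $.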
When the matrix of $ f $ satisfies only the conditions
$$ \Delta_{i} \neq 0, \quad i = 1, \dots, r, $$
$$ \Delta_{r+1} = \dots = \Delta_{n} = 0, $$
where $ r $ is the rank of the form, $ f $ can be reduced to the canonical form
$$ \tag{7} f = \sum_{k = 1}^{r} \frac{u_{k}^{2}}{\Delta_{k-1} \Delta_{k}} $$
(here $ \Delta_{0} = 1 $) by a triangular transformation of the unknowns. This reduction can be realized by using the Gauss method (see [1]). If, in particular, $ P = \mathbf R $, then the positive index of inertia of $ f $ equals the number of sign preservations, and the negative index of inertia equals the number of sign changes, in the sequence of numbers
$$ 1, \Delta_{1}, \dots, \Delta_{r}. $$
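The sign rule can be sketched numerically as follows (a NumPy illustration, assuming all leading principal minors are nonzero, i.e. condition (1) holds; names are illustrative):

```python
import numpy as np

def inertia_via_minors(A):
    """Signature of a real symmetric matrix via Jacobi's sign rule.

    Assumes all leading principal minors Delta_1, ..., Delta_n are
    nonzero; returns (positive index, negative index) of inertia.
    """
    n = A.shape[0]
    minors = [1.0]  # Delta_0 = 1
    for k in range(1, n + 1):
        minors.append(np.linalg.det(A[:k, :k]))
    pos = neg = 0
    for prev, cur in zip(minors, minors[1:]):
        if prev * cur > 0:   # sign preserved
            pos += 1
        else:                # sign changed
            neg += 1
    return pos, neg

A = np.array([[2.0, 1.0], [1.0, -1.0]])  # minors: 2, -3
print(inertia_via_minors(A))  # -> (1, 1)
```

For this matrix the sequence $1, 2, -3$ has one sign preservation and one sign change, matching its one positive and one negative eigenvalue.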
See also Law of inertia.
References
[1] | F.R. [F.R. Gantmakher] Gantmacher, "The theory of matrices" , 1 , Chelsea, reprint (1977) (Translated from Russian) MR1657129 MR0107649 MR0107648 Zbl 0927.15002 Zbl 0927.15001 Zbl 0085.01001 |
I.V. Proskuryakov
Jacobi's method is a one-step iteration method (cf. Simple-iteration method) for solving a system of linear algebraic equations $ Ax = b $, in which a preliminary transformation to the form $ x = Bx + g $ is performed according to the rule:
$$ B = E - D^{-1} A, \quad g = D^{-1} b, \quad D = (d_{ij}), $$
$$ d_{ii} = a_{ii}, \quad i = 1, \dots, n; \qquad d_{ij} = 0, \quad i \neq j. $$
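A minimal NumPy sketch of this iteration, assuming the diagonal of $ A $ is nonzero (function and variable names are illustrative):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, maxiter=500):
    """One-step Jacobi iteration x_{m+1} = B x_m + g with
    B = E - D^{-1} A, g = D^{-1} b, D = diag(A)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    d = np.diag(A)
    R = A - np.diag(d)              # off-diagonal part of A
    for _ in range(maxiter):
        x_new = (b - R @ x) / d     # componentwise Jacobi update
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# Diagonally dominant system, so the iteration converges
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([9.0, 19.0])
print(jacobi(A, b))  # close to the exact solution (13/9, 29/9)
```

Note that each update uses only the previous iterate, which is why all $ n $ components can be computed simultaneously.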
G.D. Kim
Comments
Alternative names occurring in Western literature for this iteration method are: Gauss–Jacobi iteration, point Jacobi iteration, method of successive substitutions, and method of simultaneous displacements. It was already used by C.G.J. Jacobi (1845) (see [a3]). If the matrix $ A $ is irreducibly diagonally dominant, then the method converges for any starting vector (cf. [a1]). With the introduction of new computer architectures such as vector and parallel computers, Jacobi's method has regained interest because it vectorizes and parallelizes extremely well. Comprehensive surveys of related iterative methods for sparse matrix equations can be found in [a2], [a4], [a5], and [a6].
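The convergence statement can be checked directly through the iteration matrix $ B = E - D^{-1} A $: the method converges for every starting vector exactly when the spectral radius $ \rho(B) < 1 $. A NumPy sketch (illustrative names):

```python
import numpy as np

def jacobi_converges(A):
    """True iff the Jacobi iteration for A converges from every
    starting vector, i.e. rho(I - D^{-1} A) < 1."""
    d = np.diag(A)
    B = np.eye(len(d)) - A / d[:, None]   # I - D^{-1} A, row-scaled
    return max(abs(np.linalg.eigvals(B))) < 1

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # strictly diagonally dominant
print(jacobi_converges(A))  # -> True
```

For the strictly diagonally dominant example above, $ \rho(B) \approx 0.32 $, well inside the unit disc.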
References
[a1] | L. Collatz, "Ueber die Konvergenzkriterien bei Iterationsverfahren für lineare Gleichungssysteme" Math. Z. , 53 (1950) pp. 149–161 |
[a2] | L.A. Hageman, D.M. Young, "Applied iterative methods" , Acad. Press (1981) MR0630192 Zbl 0459.65014 |
[a3] | C.G.J. Jacobi, "Ueber eine neue Auflösungsart der bei der Methode der kleinsten Quadrate vorkommenden linearen Gleichungen" Astr. Nachr. , 22 : 523 (1845) pp. 297–306 |
[a4] | J.M. Ortega, "Numerical analysis" , Acad. Press (1972) MR0403154 Zbl 0248.65001 |
[a5] | R.S. Varga, "A comparison of the successive over-relaxation method and semi-iterative methods using Chebyshev polynomials" SIAM J. Appl. Math. , 5 (1962) pp. 39–47 |
[a6] | D.M. Young, "Iterative solution of large linear systems" , Acad. Press (1971) MR0305568 Zbl 0231.65034 |
Jacobi's method is a rotation method for solving the complete eigenvalue problem (all eigenvalues and eigenvectors) for a Hermitian matrix.
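A sketch of the classical rotation method for the real symmetric case (the Hermitian case is analogous; NumPy, illustrative names): each step annihilates the largest off-diagonal entry with a plane rotation.

```python
import numpy as np

def jacobi_eigen(A, tol=1e-12, max_iter=10000):
    """Classical Jacobi rotation method for a real symmetric matrix.

    Repeatedly zeroes the largest off-diagonal entry a_pq with a
    plane rotation; returns (eigenvalues, eigenvector matrix V)
    with A = V diag(eigenvalues) V^T.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(max_iter):
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = divmod(off.argmax(), n)      # largest off-diagonal entry
        if off[p, q] < tol:
            break
        # rotation angle chosen so that the (p, q) entry becomes zero
        theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n)
        J[p, p] = J[q, q] = c
        J[p, q], J[q, p] = s, -s
        A = J.T @ A @ J                      # similarity transformation
        V = V @ J
    return np.diag(A), V
```

Each rotation preserves the eigenvalues and strictly decreases the sum of squares of the off-diagonal entries, so the iterates converge to a diagonal matrix.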
G.D. Kim
Comments
The convergence of Jacobi's method has been examined by J. von Neumann and H. Goldstine (see [a1]). Related rotation (or: transformation) methods are Householder's method and Francis' QR method (cf. [a2]).
References
[a1] | C.E. Fröberg, "Introduction to numerical analysis, theory and applications" , Benjamin/Cummings (1985) |
[a2] | G.H. Golub, C.F. van Loan, "Matrix computations" , North Oxford Acad. (1983) MR0733103 Zbl 0559.65011 |
Jacobi method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Jacobi_method&oldid=34059