Jacobi method

From Encyclopedia of Mathematics
Latest revision as of 22:14, 5 June 2020


A method for reducing a quadratic form (cf. also Quadratic forms, reduction of) to canonical form by using a triangular transformation of the unknowns; it was suggested by C.G.J. Jacobi (1834) (see [1]).

Let

$$ f = \sum_{i, k = 1}^{n} a_{ki} x_i y_k $$

be a given bilinear form (not necessarily symmetric) over a field $P$. Suppose that its matrix $A = \| a_{ki} \|$ satisfies the condition

$$ \tag{1} \Delta_k \neq 0, \quad k = 1, \dots, n, $$

where $\Delta_k$ is the minor of order $k$ in the upper left-hand corner. Then $f$ can be written in the form

$$ \tag{2} f = \sum_{k = 1}^{n} \frac{u_k v_k}{\Delta_{k-1} \Delta_k}, $$

where $u_1 = \partial f/\partial y_1$, $v_1 = \partial f/\partial x_1$, and for $k = 2, \dots, n$,

$$ \tag{3a} u_k = \left\| \begin{array}{cccc} a_{11} & \dots & a_{1,k-1} & \dfrac{\partial f}{\partial y_1} \\ \dots & \dots & \dots & \dots \\ a_{k1} & \dots & a_{k,k-1} & \dfrac{\partial f}{\partial y_k} \end{array} \right\|, $$

$$ \tag{3b} v_k = \left\| \begin{array}{cccc} a_{11} & \dots & a_{1,k-1} & \dfrac{\partial f}{\partial x_1} \\ \dots & \dots & \dots & \dots \\ a_{1k} & \dots & a_{k-1,k} & \dfrac{\partial f}{\partial x_k} \end{array} \right\|. $$

In particular, if $A$ is a symmetric matrix satisfying (1) and $f$ is the quadratic form with matrix $A$, then $f$ can be reduced to the canonical form

$$ \tag{4} f = \sum_{k = 1}^{n} \frac{u_k^2}{\Delta_{k-1} \Delta_k}, \quad \Delta_0 = 1, $$

by using the following transformation of the unknowns:

$$ \tag{5} u_k = \left\| \begin{array}{cccc} a_{11} & \dots & a_{1,k-1} & \frac{1}{2} \dfrac{\partial f}{\partial x_1} \\ \dots & \dots & \dots & \dots \\ a_{k1} & \dots & a_{k,k-1} & \frac{1}{2} \dfrac{\partial f}{\partial x_k} \end{array} \right\| $$

for $k = 2, \dots, n$, and

$$ u_1 = \frac{1}{2} \frac{\partial f}{\partial x_1}. $$

This transformation has a triangular matrix, and can be written as

$$ \tag{6} u_k = \sum_{i = k}^{n} C_{ki} x_i, $$

where $C_{ki}$ is the minor of $A$ that stands in the rows $1, \dots, k$ and in the columns $1, \dots, k-1, i$.
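As a numerical illustration of this reduction (a sketch in Python with NumPy; the matrix `A` below is an arbitrary example, not taken from the article): Gaussian elimination without pivoting on a symmetric matrix satisfying (1) produces pivots equal to the ratios $\Delta_k/\Delta_{k-1}$, the same quantities that enter the coefficients $1/(\Delta_{k-1}\Delta_k)$ of the canonical form (4).

```python
import numpy as np

# A small symmetric matrix whose leading principal minors are all nonzero,
# chosen here purely for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

n = A.shape[0]
# Leading principal minors Delta_0 = 1, Delta_1, ..., Delta_n.
minors = [1.0] + [float(np.linalg.det(A[:k, :k])) for k in range(1, n + 1)]

# Gaussian elimination without pivoting; its k-th pivot is Delta_k / Delta_{k-1}.
U = A.copy()
pivots = []
for k in range(n):
    pivots.append(U[k, k])
    for i in range(k + 1, n):
        U[i, :] -= (U[i, k] / U[k, k]) * U[k, :]

for k in range(n):
    assert np.isclose(pivots[k], minors[k + 1] / minors[k])
```

The assertions confirm, for this example, the classical identity between elimination pivots and ratios of leading principal minors that underlies the triangular reduction.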

The formulas (2)–(7) are called Jacobi's formulas.

When the matrix of $f$ satisfies only the conditions

$$ \Delta_i \neq 0, \quad i = 1, \dots, r, $$

$$ \Delta_{r+1} = \dots = \Delta_n = 0, $$

where $r$ is the rank of the form, $f$ can be reduced to the canonical form

$$ \tag{7} f = \sum_{k = 1}^{r} \frac{u_k^2}{\Delta_{k-1} \Delta_k} $$

(here $\Delta_0 = 1$) by a triangular transformation of the unknowns. This reduction can be realized by using the Gauss method (see [1]). If, in particular, $P = \mathbf{R}$, then the positive index of inertia of $f$ is equal to the number of sign preservations, and the negative index of inertia to the number of sign changes, in the sequence of numbers

$$ 1, \Delta_1, \dots, \Delta_r. $$
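This sign rule for the inertia can be checked numerically. The following Python sketch (the matrix is an arbitrary illustrative example) counts sign preservations and sign changes in the sequence $1, \Delta_1, \dots, \Delta_n$ and compares them with the inertia computed from the eigenvalues:

```python
import numpy as np

# Illustrative real symmetric matrix with nonzero leading principal minors
# (assumed example, not from the article).
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

n = A.shape[0]
# The sequence 1, Delta_1, ..., Delta_n of leading principal minors.
deltas = [1.0] + [float(np.linalg.det(A[:k, :k])) for k in range(1, n + 1)]

# Sign preservations give the positive index, sign changes the negative index.
pos = sum(1 for a, b in zip(deltas, deltas[1:]) if a * b > 0)
neg = sum(1 for a, b in zip(deltas, deltas[1:]) if a * b < 0)

# Cross-check against the inertia read off from the eigenvalues.
eigs = np.linalg.eigvalsh(A)
assert pos == int((eigs > 0).sum())
assert neg == int((eigs < 0).sum())
```

Here the sequence is $1, 1, -3, -9$: two preservations and one change, matching the eigenvalues $-1, 3, 3$.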

See also Law of inertia.

References

[1] F.R. Gantmacher [F.R. Gantmakher], "The theory of matrices" , 1 , Chelsea, reprint (1977) (Translated from Russian) MR1657129 MR0107649 MR0107648 Zbl 0927.15002 Zbl 0927.15001 Zbl 0085.01001

I.V. Proskuryakov

Jacobi's method is a one-step iteration method (cf. Simple-iteration method) for solving a system of linear algebraic equations $Ax = b$ for which a preliminary transformation to the form $x = Bx + g$ is realized by the rule:

$$ B = E - D^{-1} A, \quad g = D^{-1} b, \quad D = (d_{ij}), $$

$$ d_{ii} = a_{ii}, \quad i = 1, \dots, n; \quad d_{ij} = 0, \quad i \neq j. $$
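The rule above translates directly into code. A minimal Python sketch (the function name, test matrix, and tolerances are assumptions for illustration):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """One-step iteration x_{m+1} = B x_m + g with
    B = E - D^{-1} A, g = D^{-1} b, D = diag(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.diag(A)
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Componentwise: x_i <- (b_i - sum_{j != i} a_ij x_j) / a_ii
        x_new = (b - (A @ x - d * x)) / d
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# Example with a strictly diagonally dominant matrix (assumed, to ensure convergence).
A = [[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]]
b = [1.0, 2.0, 3.0]
x = jacobi(A, b)
```

The update uses the whole previous iterate at once, which is what distinguishes Jacobi's method from Gauss–Seidel-type sweeps.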

G.D. Kim

Comments

Alternative names occurring in Western literature for this iteration method are: Gauss–Jacobi iteration, point Jacobi iteration, method of successive substitutions, and method of simultaneous displacements. It was already used by C.G.J. Jacobi (1845) (see [a3]). If the matrix $A$ is irreducibly diagonally dominant, then the method converges for any starting vector (cf. [a1]). With the introduction of new computer architectures such as vector and parallel computers, Jacobi's method has regained interest because it vectorizes and parallelizes extremely well. Comprehensive surveys of related iterative methods for sparse matrix equations can be found in [a2], [a4], [a5], and [a6].
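The convergence claim can be illustrated numerically: convergence for every starting vector is equivalent to the spectral radius of the iteration matrix $B = E - D^{-1}A$ being less than one, and diagonal dominance guarantees this. A small check, with an example matrix assumed here:

```python
import numpy as np

# An irreducibly diagonally dominant example matrix (assumed for illustration).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])

# Iteration matrix B = E - D^{-1} A of Jacobi's method.
D_inv = np.diag(1.0 / np.diag(A))
B = np.eye(A.shape[0]) - D_inv @ A

# Spectral radius rho(B) < 1 means convergence for any starting vector.
rho = max(abs(np.linalg.eigvals(B)))
assert rho < 1.0
```

For this matrix $\rho(B) \approx 0.35$, so the error contracts by roughly a factor of three per iteration.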

References

[a1] L. Collatz, "Ueber die Konvergenzkriterien bei Iterationsverfahren für lineare Gleichungssysteme" Math. Z. , 53 (1950) pp. 149–161
[a2] L.A. Hageman, D.M. Young, "Applied iterative methods" , Acad. Press (1981) MR0630192 Zbl 0459.65014
[a3] C.G.J. Jacobi, "Ueber eine neue Auflösungsart der bei der Methode der kleinsten Quadraten vorkommenden lineare Gleichungen" Astr. Nachr. , 22 : 523 (1845) pp. 297–306
[a4] J.M. Ortega, "Numerical analysis" , Acad. Press (1972) MR0403154 Zbl 0248.65001
[a5] R.S. Varga, "A comparison of the successive over-relaxation method and semi-iterative methods using Chebyshev polynomials" SIAM J. Appl. Math. , 5 (1962) pp. 39–47
[a6] D.M. Young, "Iterative solution of large linear systems" , Acad. Press (1971) MR0305568 Zbl 0231.65034

Jacobi's method is a rotation method for solving the complete problem of eigenvalues and eigenvectors for a Hermitian matrix.

G.D. Kim

Comments

The convergence of Jacobi's method has been examined by J. von Neumann and H. Goldstine (see [a1]). Related rotation (or: transformation) methods are Householder's method and Francis' QR method (cf. [a2]).
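A minimal sketch of the rotation idea for the real symmetric special case (the function name, tolerance, and sweep bound are assumptions; this is an illustration, not Jacobi's original formulation): each step applies a plane rotation chosen to annihilate the largest off-diagonal entry, driving the matrix toward diagonal form.

```python
import numpy as np

def jacobi_eigenvalues(A, tol=1e-12, max_rotations=10000):
    """Classical Jacobi rotation method for a real symmetric matrix:
    repeatedly zero the largest off-diagonal entry with a plane rotation.
    Returns the eigenvalues in ascending order."""
    A = np.asarray(A, dtype=float).copy()
    n = A.shape[0]
    for _ in range(max_rotations):
        # Locate the largest off-diagonal element.
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:
            break
        # Rotation angle that annihilates A[p, q]: tan(2*theta) = 2 a_pq / (a_qq - a_pp).
        theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n)
        J[p, p] = J[q, q] = c
        J[p, q] = s
        J[q, p] = -s
        A = J.T @ A @ J  # similarity transform preserves the spectrum
    return np.sort(np.diag(A))
```

Earlier zeros may be destroyed by later rotations, but the sum of squares of the off-diagonal entries decreases monotonically, which is the essence of the convergence analysis cited above.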

References

[a1] C.E. Fröberg, "Introduction to numerical analysis, theory and applications" , Benjamin/Cummings (1985)
[a2] G.H. Golub, C.F. van Loan, "Matrix computations" , North Oxford Acad. (1983) MR0733103 Zbl 0559.65011
How to Cite This Entry:
Jacobi method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Jacobi_method&oldid=47458
This article was adapted from an original article by I.V. Proskuryakov, G.D. Kim (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article