Relaxation method

{{TEX|done}}
 
''weakening method''
  
A method for the iterative solution of a system of linear algebraic equations $Ax=b$, the elementary step of which consists of varying only one component of the vector of unknowns, the index of the component to be varied being chosen in a specific cyclic order. The relaxation method is most often used for solving systems with a positive-definite matrix $A$.
  
If one component of the vector of unknowns $x^k$ is varied in such a way that the new approximation $x^{k+1}$ minimizes the quadratic form $(A(x^{k+1}-x),x^{k+1}-x)$, where $x$ is the exact solution, then the relaxation method is called a complete relaxation method. If, however, after one elementary step the value of the quadratic form is only reduced and not minimized, then the relaxation method is called an incomplete relaxation method.
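For a symmetric positive-definite $A$ this elementary step can be written out explicitly (an illustrative computation; here $x$ is the exact solution and $a_{ij}$ are the entries of $A$). Since $\partial/\partial y_i\,(A(y-x),y-x)=2(Ay-b)_i$, minimization over the $i$-th component gives

$$x_i^{k+1}=\frac{1}{a_{ii}}\Big(b_i-\sum_{j\neq i}a_{ij}\tilde x_j\Big),$$

where $\tilde x_j$ denotes the current value of the $j$-th component (already updated for $j<i$ in a cyclic sweep, not yet updated for $j>i$). Thus complete relaxation solves the $i$-th equation exactly for the $i$-th unknown, and one full cyclic sweep coincides with an iteration of the Seidel method.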
  
The best investigated method is that of successive upper relaxation, where the matrix $A$ possesses the so-called property $(A)$ and is ordered accordingly. A matrix $A$ is called a matrix possessing property $(A)$ if there is a permutation matrix $P$ such that the matrix $PAP^T$ has the form
  
$$\begin{Vmatrix}D_1&H\\K&D_2\end{Vmatrix},$$
  
where $D_1$ and $D_2$ are square diagonal matrices.
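As an illustration (not part of the original text): every tridiagonal matrix possesses property $(A)$. Grouping the odd-numbered unknowns before the even-numbered ones (a "red–black" ordering) gives, for instance for $n=4$,

$$PAP^T=\begin{Vmatrix}a_{11}&0&a_{12}&0\\0&a_{33}&a_{32}&a_{34}\\a_{21}&a_{23}&a_{22}&0\\0&a_{43}&0&a_{44}\end{Vmatrix},$$

which has the required form with $D_1=\operatorname{diag}(a_{11},a_{33})$, $D_2=\operatorname{diag}(a_{22},a_{44})$. Matrices obtained from five-point difference approximations of second-order elliptic equations likewise possess property $(A)$.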
  
 
The iteration scheme of the relaxation method is as follows:
  
$$(D+\omega L)x^{k+1}=((1-\omega)D-\omega U)x^k+\omega b,\quad k=0,1,\dots,$$
  
where $\omega$ is the relaxation parameter, $D$ is the diagonal part, $L$ the strictly lower-triangular part and $U$ the strictly upper-triangular part in the decomposition $A=D+L+U$. If $\omega>1$, the method is called an upper relaxation (over-relaxation) method, and if $\omega\leq1$, a lower relaxation method (for $\omega=1$ the scheme coincides with the Seidel method). The parameter $\omega$ is chosen so as to minimize the [[Spectral radius|spectral radius]] of the matrix $S$ of transfer from iteration to iteration:
  
$$S=(D+\omega L)^{-1}((1-\omega)D-\omega U).$$
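A minimal computational sketch of the componentwise form of this scheme follows (it is not part of the original article; the function name, tolerance, stopping rule and test system are illustrative only):

<pre>
import numpy as np

def sor(A, b, omega, x0=None, tol=1e-10, max_iter=10000):
    """Successive relaxation for Ax = b, written componentwise.

    Equivalent to (D + omega*L) x^{k+1} = ((1-omega)*D - omega*U) x^k + omega*b,
    where D, L, U are the diagonal, strictly lower and strictly upper parts of A.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # components j < i already hold the new iterate (the L-part),
            # components j > i still hold the previous iterate (the U-part)
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Illustrative use on a small symmetric positive-definite system.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x = sor(A, b, omega=1.1)   # omega > 1: over-relaxation
assert np.allclose(A @ x, b)
</pre>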
  
If $A$ is a symmetric matrix with positive diagonal elements and $\lambda_i$ are the roots of the determinant equation $\det(L+\lambda D+U)=0$, then the optimum value of the parameter $\omega$ is given by the formula
  
$$\omega=\omega_0=\frac{2}{1+\sqrt{1-\lambda_0^2}},$$
  
where $\lambda_0^2=\max\lambda_i^2$. For $\omega=\omega_0$ the spectral radius of $S$ is equal to
  
$$\omega_0-1=\frac{1-\sqrt{1-\lambda_0^2}}{1+\sqrt{1-\lambda_0^2}}<1.$$
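The roots $\lambda_i$ are precisely the eigenvalues of the Jacobi iteration matrix $-D^{-1}(L+U)$, so that $\lambda_0$ is the spectral radius of the simple (Jacobi) iteration. As a numerical illustration (not from the original article): if $\lambda_0=0.9$, then $\omega_0=2/(1+\sqrt{1-0.81})\approx1.39$ and the spectral radius of $S$ equals $\omega_0-1\approx0.39$, whereas for a consistently ordered matrix with property $(A)$ the Seidel method ($\omega=1$) has spectral radius $\lambda_0^2=0.81$.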
  
Cases have also been examined where some of the $\lambda_i$ are complex. Block relaxation methods have been developed as well.
  
 
====References====
<table>
<TR><TD valign="top">[1]</TD> <TD valign="top">  D.M. Young,  "Iterative methods for solving partial differential equations of elliptic type"  ''Trans. Amer. Math. Soc.'' , '''76''' :  1  (1954)  pp. 92–111</TD></TR>
<TR><TD valign="top">[2]</TD> <TD valign="top">  D.M. Young,  "Iterative solution of large linear systems" , Acad. Press  (1971)</TD></TR>
<TR><TD valign="top">[3]</TD> <TD valign="top">  G.E. Forsythe,  W.R. Wasow,  "Finite-difference methods for partial differential equations" , Wiley  (1960)</TD></TR>
<TR><TD valign="top">[4]</TD> <TD valign="top">  D.K. Faddeev,  V.N. Faddeeva,  "Computational methods of linear algebra" , Freeman  (1963)  (Translated from Russian)</TD></TR>
<TR><TD valign="top">[5]</TD> <TD valign="top">  L.A. Hageman,  D.M. Young,  "Applied iterative methods" , Acad. Press  (1981)</TD></TR>
</table>
[[Category:Numerical analysis and scientific computing]]
