Variable-directions method

An iterative method (cf. Iteration methods) for solving systems of linear or non-linear equations arising in difference or projection-difference methods for the approximate solution of, for example, boundary value problems for partial differential equations of elliptic type.

Suppose, for example, that there are two spatial variables, and consider a sequence of square grids $ \omega _ {h} $ with step $ h > 0 $ and nodes $ x _ {i} \equiv ( i _ {1} h , i _ {2} h) $, where $ i \equiv ( i _ {1} , i _ {2} ) $ is a vector with integer components. Let $ \Omega _ {h} $ be the set of nodes of $ \omega _ {h} $ at which one seeks a solution of a difference or projection-difference problem written in the form of the operator equation

$$ L _ {h} ( u _ {h} ) = f _ {h} $$

in the Euclidean space $ H _ {h} $. The latter can be identified with the space of functions given at the nodes of $ \Omega _ {h} $; the dimension of $ H _ {h} $ coincides with the number of points $ N _ {h} $ from $ \Omega _ {h} $.
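
For instance (the standard model problem, used below only as an illustration), for the Dirichlet problem for the Poisson equation $ - \Delta u = f $ on the unit square the five-point difference scheme reads

$$ L _ {h} u _ {h} ( x _ {i} ) = \frac{1}{h ^ {2} } [ 4 u _ {h} ( x _ {i} ) - u _ {h} ( x _ {i} + h e _ {1} ) - u _ {h} ( x _ {i} - h e _ {1} ) - u _ {h} ( x _ {i} + h e _ {2} ) - u _ {h} ( x _ {i} - h e _ {2} ) ] = f _ {h} ( x _ {i} ), \qquad x _ {i} \in \Omega _ {h} , $$

where $ e _ {1} , e _ {2} $ are the unit coordinate vectors and values of $ u _ {h} $ at nodes outside $ \Omega _ {h} $ are supplied by the boundary condition; for $ h = 1 / ( m + 1 ) $ one has $ N _ {h} = m ^ {2} $.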

Let

$$ \tag{1 } A _ {h} u _ {h} ( x _ {i} ) = \sum _ {x _ {i+j} \in \Omega _ {h} } a _ {i,j} u _ {h} ( x _ {i+j} ), \qquad x _ {i} \in \Omega _ {h} , $$

be linear operators mapping $ H _ {h} $ into $ H _ {h} $. Among the operators (1) there are those for which the non-zero coefficients $ a _ {i,j} $ correspond only to shift vectors $ j \equiv ( j _ {1} , j _ {2} ) $ with $ j _ {2} = 0 $. Such operators are called one-dimensional, acting with respect to $ x _ {1} $, and are denoted by $ A _ {h,x _ {1} } $; similarly, for shift vectors with $ j _ {1} = 0 $ one defines the one-dimensional operators $ A _ {h,x _ {2} } $, acting with respect to $ x _ {2} $.
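
In matrix terms this can be made concrete as follows (a minimal sketch in Python, assuming the five-point model problem above with homogeneous Dirichlet data and row-major storage of the $ m \times m $ interior nodes; the names make_one_dimensional_ops and T are illustrative, not taken from the literature):

import numpy as np

def make_one_dimensional_ops(m, h):
    # Per-line second-difference matrix (homogeneous Dirichlet conditions):
    # T = tridiag(-1, 2, -1) / h^2.
    T = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    I = np.eye(m)
    # Matrices on the N_h = m*m dimensional space H_h, node (i1, i2) stored at i1*m + i2:
    A_x1 = np.kron(T, I)   # couples nodes with the same i2 only (shift vectors with j2 = 0)
    A_x2 = np.kron(I, T)   # couples nodes with the same i1 only (shift vectors with j1 = 0)
    return A_x1, A_x2

For the model problem the full five-point operator is just the sum of these two one-dimensional operators.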

The systems of equations

$$ A _ {h,x _ {r} } u _ {h} = g _ {h} ,\ \ r = 1, 2, $$

split up into individual subsystems, each of which links only the values of $ u _ {h} ( x _ {j} ) $ at nodes lying on separate horizontal (for $ A _ {h,x _ {1} } $) or vertical (for $ A _ {h,x _ {2} } $) lines in the grid. The method is characterized by the use of intertwining operators for $ A _ {h} $ of the form

$$ R _ {h} = A _ {h,x _ {1} } A _ {h,x _ {2} } . $$
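
Concretely (a sketch under the same assumptions and storage convention as in the previous fragment; solve_along_x1 and solve_along_x2 are invented names), a system with a one-dimensional operator decouples into one small system per grid line:

import numpy as np

def solve_along_x1(T, g):
    # Solve A_{h,x1} v = g, with g stored as an m-by-m array g[i1, i2] and
    # T the per-line matrix of A_{h,x1} (for the model problem, the tridiagonal
    # matrix of the previous fragment): for each fixed i2 (one horizontal line)
    # the unknowns v[:, i2] satisfy the independent system T v[:, i2] = g[:, i2].
    return np.linalg.solve(T, g)           # all horizontal-line systems at once

def solve_along_x2(T, g):
    # Same for A_{h,x2}: one independent system per vertical line (fixed i1).
    return np.linalg.solve(T, g.T).T

Since each per-line matrix is tridiagonal for the model problem, a sweep over all lines costs $ O( N _ {h} ) $ arithmetic operations once the dense solver is replaced by the Thomas algorithm.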

The solution to the system

$$ \tag{2 } A _ {h,x _ {1} } A _ {h,x _ {2} } u _ {h} = g _ {h} $$

then amounts to the successive solution of the two systems

$$ \tag{3 } A _ {h,x _ {1} } v _ {h} = g _ {h} , $$

$$ \tag{4 } A _ {h,x _ {2} } u _ {h} = v _ {h} , $$

in which one first solves the separate subsystems on the horizontal lines in the grid (in the case (3)) and then varies the directions and solves the subsystems on the vertical lines (in the case (4)). Usually, the operators $ R _ {h} $ are taken such that only $ O( N _ {h} ) $ arithmetic operations are involved in solving (3) and (4), and consequently also (2). Therefore, each iteration in the method, of the form

$$ \tag{5 } R _ {h} ^ {(n)} u _ {h} ^ {(n+1)} = R _ {h} ^ {(n)} u _ {h} ^ {(n)} - ( L _ {h} ( u _ {h} ^ {(n)} ) - f _ {h} ), \qquad n = 0, 1, \dots $$

requires $ O( N _ {h} ) $ arithmetic operations; the superscript $ n $ denotes the iteration number.
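
As an illustration (again only a sketch; variable_directions_step, T1, T2 and apply_L are invented names), one iteration of (5) with a factorized operator $ R _ {h} = A _ {h,x _ {1} } A _ {h,x _ {2} } $ amounts to forming the residual and performing the two line sweeps:

import numpy as np

def variable_directions_step(u, f, T1, T2, apply_L):
    # One step of (5): R_h (u^{(n+1)} - u^{(n)}) = -(L_h(u^{(n)}) - f_h),
    # with R_h = A_{h,x1} A_{h,x2}.  T1, T2 are the per-line matrices of the two
    # one-dimensional factors, apply_L(u) evaluates L_h(u); grid functions are
    # stored as m-by-m arrays u[i1, i2].
    g = -(apply_L(u) - f)                  # right-hand side of (5) in correction form
    v = np.linalg.solve(T1, g)             # system (3): horizontal sweep
    delta = np.linalg.solve(T2, v.T).T     # system (4): vertical sweep
    return u + delta

With tridiagonal per-line matrices both sweeps together cost $ O( N _ {h} ) $ operations, in agreement with the operation count stated above.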

The most effective results are obtained in the commutative case, where $ L _ {h} $ is a self-adjoint positive-definite operator and the operators $ R _ {h} ^ {(n)} $ are self-adjoint and commute with $ L _ {h} $. In that case, for any $ \epsilon > 0 $, the norm of the error in the initial approximation can be reduced by a factor $ \epsilon ^ {-1} $ in $ M = O( | \mathop{\rm ln} \epsilon | \, | \mathop{\rm ln} h | ) $ iterations. The commutative case is encountered only for boundary value problems in which the variables can be separated, so the region must be a rectangle. The most common version of the method (5), for the equation

$$ L _ {h} u _ {h} = ( \Lambda _ {h,x _ {1} } + \Lambda _ {h,x _ {2} } ) u _ {h} = f _ {h} $$

is the method

$$ \left [ \prod _ {r=1} ^ {2} ( E _ {h} + \tau _ {r,h} ^ {(n)} \Lambda _ {h,x _ {r} } ) \right ] ( u _ {h} ^ {(n+1)} - u _ {h} ^ {(n)} ) = - \gamma _ {h} ^ {(n)} ( L _ {h} u _ {h} ^ {(n)} - f _ {h} ), $$

where $ E _ {h} $ is the identity operator.
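
Putting the pieces together for the model problem gives, for example, the following minimal sketch (of Peaceman–Rachford type; the single fixed parameter $ \tau $ and the choice $ \gamma _ {h} ^ {(n)} = 2 \tau $ are used only for illustration and are not the optimal commutative-case parameter sequence; adi_model_problem is an invented name):

import numpy as np

def adi_model_problem(f, h, tau, n_iter):
    # Variable-directions iteration for L_h u_h = (Lambda_{h,x1} + Lambda_{h,x2}) u_h = f_h
    # (five-point scheme, homogeneous Dirichlet data), with
    # R_h^{(n)} = (E_h + tau*Lambda_{h,x1})(E_h + tau*Lambda_{h,x2}) and gamma = 2*tau.
    m = f.shape[0]                                   # interior nodes per direction, N_h = m*m
    T = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2   # per-line Lambda
    F = np.eye(m) + tau * T                          # per-line factor E_h + tau*Lambda_{h,x_r}
    u = np.zeros_like(f)
    for _ in range(n_iter):
        Lu = T @ u + u @ T                           # L_h u^{(n)}  (T is symmetric)
        g = -2.0 * tau * (Lu - f)                    # -gamma*(L_h u^{(n)} - f_h)
        v = np.linalg.solve(F, g)                    # horizontal sweep
        u = u + np.linalg.solve(F, v.T).T            # vertical sweep and update
    return u

Replacing the dense solves by tridiagonal ones keeps each iteration at $ O( N _ {h} ) $ operations; faster convergence is obtained by cycling through a suitable sequence of parameters $ \tau _ {r,h} ^ {(n)} $, as in the commutative case discussed above.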

Approaches based on various variational principles are also used, in addition to the approach in which the iteration parameters are chosen so as to minimize the norm of the operator taking the zeroth iterate into the iterate of a given index.

The method is frequently used as an internal iterative process in two-step iteration methods based on spectrally equivalent operators, which makes it suitable for problems with variable coefficients and for non-linear problems.

References

[1] D.W. Peaceman, H.H. Rachford, "The numerical solution of parabolic and elliptic differential equations" J. Soc. Industr. Appl. Math. , 3 : 1 (1955) pp. 28–41
[2] E.G. D'yakonov, "Iterative methods for solving difference analogues of boundary value problems for equations of elliptic type" , Kiev (1970) (In Russian)
[3] N.N. Yanenko, "The method of fractional steps; the solution of problems of mathematical physics in several variables" , Springer (1971) (Translated from Russian)
[4] A.A. Samarskii, "Theorie der Differenzverfahren" , Akad. Verlagsgesell. Geest u. Portig K.-D. (1984) (Translated from Russian)
[5] G.I. Marchuk, "Methods of numerical mathematics" , Springer (1982) (Translated from Russian)
How to Cite This Entry:
Variable-directions method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Variable-directions_method&oldid=49111
This article was adapted from an original article by E.G. D'yakonov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article