Boundary value problem, numerical methods for partial differential equations

Approximate methods of solution which yield the solution to the problem in the form of a numerical table. An exact construction of a solution (in terms of explicit formulas, series, etc.) to a boundary value problem is only rarely possible. The most prevalent of the methods of approximate solution are difference methods (see [1]); they are applied in the most general problems and are readily computerized. The essential idea of difference methods is to replace the original domain of variation of the independent variables by a discrete set of points, a grid, and to approximate the derivatives figuring in the equation and the boundary conditions by difference quotients at the grid points. As a result of this procedure the original problem is replaced by a system of finitely many algebraic equations (linear or non-linear); this system is known as a difference scheme. The solution of the difference scheme is taken to be an approximate solution of the original problem. The accuracy of the approximation depends on the method of approximation and on the fineness of the grid, i.e. on how densely the grid points fill the original domain. In the rest of this article attention will be restricted to linear boundary value problems for partial differential equations, and it will be assumed that the problem in question is well-posed. In order to justify difference methods one must investigate the well-posedness of the difference problem and its convergence as the grid is refined. A difference problem is said to be well-posed if it has a unique stable solution, whatever the terms on the right-hand sides of the equations. By stability of a difference scheme one means that its solution depends continuously on the right-hand side, uniformly with respect to the grid spacing.

For example, suppose one wishes to solve the Dirichlet problem for the Poisson equation in the square $ G = \{ 0 < x _ {1} , x _ {2} < 1 \} $ with boundary $ \Gamma $:

$$ \frac{\partial ^ {2} u }{\partial x _ {1} ^ {2} } + \frac{\partial ^ {2} u }{\partial x _ {2} ^ {2} } = \ - f (x _ {1} , x _ {2} ),\ \ (x _ {1} , x _ {2} ) \in G, $$

$$ u (x _ {1} , x _ {2} ) = \mu (x _ {1} ,\ x _ {2} ),\ (x _ {1} , x _ {2} ) \in \Gamma . $$

The domain $ G $ is replaced by a square grid $ G _ {h} $ with spacing $ h $, i.e. by the set of points

$$ G _ {h} = \ \{ ( x _ {1} ^ {(i)} , x _ {2} ^ {(j)} ) : x _ {1} ^ {(i)} = ih,\ x _ {2} ^ {(j)} = jh; $$

$$ {} i, j = 1, \dots, N - 1 \} ,\ hN = 1, $$

with boundary

$$ \Gamma _ {h} = \ \bigcup _ {i, j = 1 } ^ { {N } - 1 } \{ ( 0, x _ {2} ^ {(j)} ) \cup ( 1, x _ {2} ^ {(j)} ) \cup ( x _ {1} ^ {(i)} , 0 ) \cup ( x _ {1} ^ {(i)} , 1 ) \} , $$

and the derivatives figuring in the equation by difference quotients

$$ ( \Lambda _ {1} u) _ {i, j } = \ u _ {\overline{x} _ {1} x _ {1} , i, j } = \ \frac{(u _ {i + 1, j } - 2u _ {i, j } + u _ {i - 1, j } ) }{h ^ {2} } , $$

$$ ( \Lambda _ {2} u) _ {i, j } = u _ {\overline{x} _ {2} x _ {2} , i, j } = \frac{(u _ {i, j + 1 } - 2u _ {i, j } + u _ {i, j - 1 } ) }{h ^ {2} } , $$

where $ u _ {i, j } = u (x _ {1} ^ {(i)} , x _ {2} ^ {(j)} ) $. The difference scheme is

$$ \tag{1 } ( \Lambda _ {h} y) _ {i, j } = \ y _ {\overline{x} _ {1} x _ {1} , i, j } + y _ {\overline{x} _ {2} x _ {2} , i, j } = \ - f _ {i, j } , $$

$$ i, j = 1, \dots, N - 1; \ \left . y _ {i, j } \right | _ {\Gamma _ {h} } = \mu _ {i, j } , $$

where $ y _ {i, j } $ is the solution.

Problem (1) has a unique solution for any inhomogeneous term $ f $ and any boundary conditions $ \mu $ (see [2]). Moreover, as $ h \rightarrow 0 $ the solution of the difference scheme (1) converges to the solution of the original problem with second-order accuracy in the maximum norm, i.e.

$$ \max _ {(x _ {1} , x _ {2} ) \in G _ {h} } | y (x _ {1} , x _ {2} ) - u (x _ {1} , x _ {2} ) | \leq Mh ^ {2} , $$

where $ M $ is a constant independent of $ h $.
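
To make scheme (1) and this error bound concrete, here is a minimal sketch (Python with NumPy; the language, the helper name and the manufactured solution $u = \sin \pi x_1 \sin \pi x_2$ are illustrative choices, not part of the article). The five-point system is assembled for a small grid and solved directly, and the maximum-norm error drops roughly by a factor of four each time $h$ is halved, consistent with the bound $Mh^2$. The matrix is stored densely purely for clarity, which is feasible only for small $N$.

```python
import numpy as np

def solve_poisson_5pt(N):
    """Solve the five-point scheme (1) on the unit square for a known test solution.

    Grid: x^(i) = i*h with h*N = 1; unknowns at i, j = 1, ..., N-1.
    Manufactured solution u = sin(pi x1) sin(pi x2), so f = 2 pi^2 u and mu = 0.
    """
    h = 1.0 / N
    m = N - 1
    x = np.arange(1, N) * h                      # interior grid points
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    u_exact = np.sin(np.pi * X1) * np.sin(np.pi * X2)
    f = 2.0 * np.pi**2 * u_exact                 # -Laplace(u) = f

    # Scheme (1) multiplied by h^2:  4*y_ij - (sum of neighbours) = h^2 * f_ij
    A = np.zeros((m * m, m * m))
    b = h**2 * f.reshape(m * m)
    for i in range(m):
        for j in range(m):
            k = i * m + j
            A[k, k] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < m and 0 <= jj < m:  # boundary values mu = 0 drop out
                    A[k, ii * m + jj] = -1.0
    y = np.linalg.solve(A, b).reshape(m, m)
    return np.max(np.abs(y - u_exact))           # error in the maximum norm

for N in (8, 16, 32):
    print(N, solve_poisson_5pt(N))               # error drops roughly 4x when h is halved
```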

The difference scheme (1) is a system of linear algebraic equations, characteristically with a large number of equations (to be precise: $ (N - 1) ^ {2} $ equations, with $ N \rightarrow \infty $ as $ h \rightarrow 0 $) and a large number of zeros in the matrix of the system; moreover, the system is generally ill-conditioned (the ratio of the minimum eigenvalue to the maximum one is of the same order of magnitude as $ h ^ {2} $ as $ h \rightarrow 0 $). Such systems of equations, which arise when differential equations are replaced by difference equations, may be solved using various efficient methods, both direct and iterative. Direct methods yield an exact solution of the difference equation after a finite number of arithmetic operations. This category includes various versions of the double-sweep method, including matrix double-sweep, the decomposition method, the fast Fourier transform, and the method of representations by sums (see [1], [2], [3], [6]). A measure of the efficiency of direct methods is the order of magnitude of the number of operations as $ h \rightarrow 0 $. Thus, solving problem (1) by matrix double-sweep requires $ O (h ^ {-4} ) $ operations, whereas the fast Fourier transform method needs, for the same problem, $ O (h ^ {-2} \log h ^ {-1} ) $ operations. The most widely used iterative methods for the solution of difference problems are Richardson's method with Chebyshev interpolation points, the alternating-triangular iterative method, and various alternating-direction methods (see [2]). The efficiency of iterative methods is measured by the order of magnitude of the minimum number of iterations, $ n _ {0} ( \epsilon ) $, necessary to decrease the error of the initial approximation by a factor $ 1/ \epsilon $. For example, in solving problem (1) by Richardson's method, $ n _ {0} ( \epsilon ) $ is of the order $ h ^ {-1} \log (1/ \epsilon ) $, while the alternating-direction method with optimally-selected iteration parameters has $ n _ {0} ( \epsilon ) = O ( \log h ^ {-1} \log ( 1/ \epsilon )) $. Iterative methods are more universal and simpler to implement than direct methods, and consequently have been widely utilized to solve difference problems.
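
As a complement, the next sketch illustrates the general structure of an iterative solution of (1), using the simple Jacobi iteration rather than Richardson's method with Chebyshev interpolation points or the alternating-direction methods named above (again Python with NumPy; the names and the stopping test on successive iterates are illustrative simplifications). Its iteration count grows roughly like $h^{-2}$ as the grid is refined, one symptom of the ill-conditioning noted above; the faster methods of the article exist precisely to reduce this growth.

```python
import numpy as np

def jacobi_poisson(N, eps=1e-6, max_iter=200000):
    """Jacobi iteration for scheme (1) with the test data of the previous sketch.

    Deliberately the simplest possible iteration (not one of the methods named in
    the article); returns the number of sweeps until successive iterates differ
    by less than eps in the maximum norm.
    """
    h = 1.0 / N
    x = np.arange(N + 1) * h
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    f = 2.0 * np.pi**2 * np.sin(np.pi * X1) * np.sin(np.pi * X2)

    y = np.zeros((N + 1, N + 1))                 # initial guess; boundary stays 0
    for n in range(max_iter):
        y_new = y.copy()
        y_new[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] +
                                    y[1:-1, 2:] + y[1:-1, :-2] +
                                    h**2 * f[1:-1, 1:-1])
        if np.max(np.abs(y_new - y)) < eps:
            return n + 1
        y = y_new
    return max_iter

for N in (8, 16, 32):
    print(N, jacobi_poisson(N))                  # the count grows roughly like N^2
```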

For example, consider solving the first boundary value problem for the heat equation:

$$ \tag{2 } \left . \begin{array}{ll} \frac{\partial u }{\partial t } = \ \frac{\partial ^ {2} u }{\partial x _ {1} ^ {2} } + \frac{\partial ^ {2} u }{\partial x _ {2} ^ {2} } , & t > 0,\ (x _ {1} , x _ {2} ) \in G \\ u (x _ {1} , x _ {2} , 0) = \ u _ {0} (x _ {1} , x _ {2} ); &{} \\ u (x _ {1} , x _ {2} , t) = 0, & (x _ {1} , x _ {2} ) \in \Gamma . \\ \end{array} \ \right \} $$

To solve this problem, one prescribes a time grid with spacing $ \tau > 0 $:

$$ \omega _ \tau = \ \{ {t _ {n} = n \tau } : { n = 0, 1 ,\dots } \} $$

and the grid $ G _ {h} $ (see above) for the space variables. Let $ u _ {i, j } ^ {n} = u (x _ {1} ^ {(i)} , x _ {2} ^ {(j)} , t _ {n} ) $. The derivative $ u _ {t} ^ \prime $ is approximated by the quotient

$$ u _ {t, i, j } = \ \frac{u _ {i, j } ^ {n + 1 } - u _ {i, j } ^ {n} }{\tau } $$

and the Laplacian $ \Delta $ by the difference operator $ \Delta _ {h} = \Lambda _ {1} + \Lambda _ {2} $. The original equation (2) is replaced by a corresponding difference scheme

$$ \tag{3 } \left . \begin{array}{l} y _ {t, i, j } = \ \sigma ( \Delta _ {h} y) _ {i, j } ^ {n + 1 } + (1 - \sigma ) ( \Delta _ {h} y) _ {i, j } ^ {n} ; \\ y _ {i, j } ^ {0} = \ u _ {0} ( x _ {1} ^ {(i)} , x _ {2} ^ {(j)} ) ,\ \ i, j = 1, \dots, N - 1; \\ \left . y _ {i,j} ^ {n} \right | _ {\Gamma _ {h} \times \omega _ \tau } = 0. \\ \end{array} \ \right \} $$

The parameter $ \sigma $ appearing in these equations determines the stability and accuracy of the scheme. If $ \sigma \geq 0.5 $, the scheme is stable for any grid spacings $ \tau $ and $ h $ (an absolutely-stable difference scheme). But if $ \sigma < 0.5 $, the scheme is stable only if some restriction is imposed on $ \gamma = \tau /h ^ {2} $ (a conditionally-stable difference scheme); for example, the explicit scheme ($ \sigma = 0 $) is stable if $ \gamma \leq 1/4 $. If $ \sigma = 0.5 $, the scheme is second-order accurate with respect to $ \tau $ and $ h $; for other values of $ \sigma $ it is first-order accurate in $ \tau $ and second-order accurate in $ h $. The difference problem (3) is solved "in layers". The $ n $-th layer is the set of all points of the grid $ G _ {h} \times \omega _ \tau $ for some fixed $ n $. The values of $ y _ {i, j } ^ {0} $ on the zero-th layer ($ n = 0 $) are known from the initial conditions. If the values on the $ n $-th layer are known, then the values on the next layer $ y _ {i, j } = y _ {i, j } ^ {n + 1 } $ are determined from the system of equations

$$ \tag{4 } \left . \begin{array}{l} y _ {i, j } - \sigma \tau ( \Delta _ {h} y) _ {i, j } = \ f _ {i, j } ^ {n} ,\ \ i, j = 1, \dots, N - 1, \\ \left . y _ {i, j } \right | _ {\Gamma _ {h} } = 0, \\ \end{array} \ \right \} $$

where

$$ f _ {i, j } ^ {n} = \ y _ {i, j } ^ {n} + (1 - \sigma ) \tau ( \Delta _ {h} y) _ {i, j } ^ {n} . $$
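
Before turning to the solution of (4), the conditional stability of the explicit scheme ($\sigma = 0$) can be seen in a short experiment (a sketch in Python with NumPy; the function name, the grid size and the small checkerboard perturbation used to excite the highest-frequency grid mode are illustrative assumptions, not part of the article): with $\gamma = \tau / h^2 = 0.25$ the computed layers decay, as the exact solution does, while with $\gamma = 0.3$ the perturbation grows without bound.

```python
import numpy as np

def explicit_heat(gamma, N=16, steps=200):
    """March the explicit scheme (sigma = 0) of (3): y^{n+1} = y^n + tau * Delta_h y^n.

    gamma = tau / h^2.  Initial data u0 = sin(pi x1) sin(pi x2) plus a tiny
    checkerboard perturbation that excites the highest-frequency grid mode;
    boundary values are zero.  Returns max |y| after the given number of layers.
    """
    h = 1.0 / N
    tau = gamma * h**2
    x = np.arange(N + 1) * h
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    y = np.sin(np.pi * X1) * np.sin(np.pi * X2)
    i_plus_j = np.add.outer(np.arange(1, N), np.arange(1, N))
    y[1:-1, 1:-1] += 1e-6 * (-1.0) ** i_plus_j   # high-frequency perturbation

    for _ in range(steps):
        lap = np.zeros_like(y)
        lap[1:-1, 1:-1] = (y[2:, 1:-1] + y[:-2, 1:-1] + y[1:-1, 2:] +
                           y[1:-1, :-2] - 4.0 * y[1:-1, 1:-1]) / h**2
        y = y + tau * lap                        # boundary rows stay zero
    return np.max(np.abs(y))

print(explicit_heat(0.25))   # gamma <= 1/4: the layers decay, like the exact solution
print(explicit_heat(0.30))   # gamma > 1/4: the high-frequency perturbation blows up
```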

The solution of problem (4) may be found by any of the methods for solving the stationary problem (1). However, there are more economical algorithms for the solution of multi-dimensional non-stationary boundary value problems, namely alternating-direction methods (see [1]–[5]), which enable one to reduce the solution of a multi-dimensional problem to that of a sequence of one-dimensional problems. Thus, the heat equation may be solved using the following alternating-direction scheme:

$$ \left . \begin{array}{l} \frac{y _ {i, j } ^ {n + 1/2 } - y _ {i, j } ^ {n} }{0.5 \tau } = \ \Lambda _ {1} y _ {i, j } ^ {n + 1/2 } + \Lambda _ {2} y _ {i, j } ^ {n} , \\ \frac{y _ {i, j } ^ {n + 1 } - y _ {i, j } ^ {n + 1/2 } }{0.5 \tau } = \ \Lambda _ {1} y _ {i, j } ^ {n + 1/2 } + \Lambda _ {2} y _ {i, j } ^ {n + 1 } . \\ \end{array} \ \right \} $$

This scheme is absolutely stable, second-order accurate and is solved by successive inversion of one-dimensional difference operators.
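
The following sketch shows how one step of this alternating-direction scheme can be carried out (Python with NumPy; the Thomas-algorithm routine and the helper names are illustrative, not prescribed by the article): each half-step reduces to tridiagonal solves along one coordinate direction, one grid line at a time, which is exactly the successive inversion of one-dimensional difference operators mentioned above.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b, super-diagonal c."""
    n = len(d)
    cp = np.zeros(n); dp = np.zeros(n)
    cp[0] = c[0] / b[0]; dp[0] = d[0] / b[0]
    for k in range(1, n):
        denom = b[k] - a[k] * cp[k - 1]
        cp[k] = c[k] / denom
        dp[k] = (d[k] - a[k] * dp[k - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for k in range(n - 2, -1, -1):
        x[k] = dp[k] - cp[k] * x[k + 1]
    return x

def adi_step(y, tau, h):
    """One step y^n -> y^{n+1} of the alternating-direction scheme above.

    First half-step: implicit in x1, explicit in x2; then the roles swap.
    y holds grid values including the zero boundary rows and columns.
    """
    m = y.shape[0] - 2                           # interior points per direction
    r = 0.5 * tau / h**2
    a = np.full(m, -r); b = np.full(m, 1 + 2 * r); c = np.full(m, -r)

    # Half-step 1: (I - 0.5*tau*Lambda_1) y^{n+1/2} = (I + 0.5*tau*Lambda_2) y^n
    yh = np.zeros_like(y)
    for j in range(1, m + 1):                    # one tridiagonal solve per line x2 = const
        rhs = y[1:-1, j] + r * (y[1:-1, j + 1] - 2 * y[1:-1, j] + y[1:-1, j - 1])
        yh[1:-1, j] = thomas(a, b, c, rhs)

    # Half-step 2: (I - 0.5*tau*Lambda_2) y^{n+1} = (I + 0.5*tau*Lambda_1) y^{n+1/2}
    ynew = np.zeros_like(y)
    for i in range(1, m + 1):                    # one tridiagonal solve per line x1 = const
        rhs = yh[i, 1:-1] + r * (yh[i + 1, 1:-1] - 2 * yh[i, 1:-1] + yh[i - 1, 1:-1])
        ynew[i, 1:-1] = thomas(a, b, c, rhs)
    return ynew

# Usage: march u0 = sin(pi x1) sin(pi x2) with a time step far larger than h^2.
N = 32
h = 1.0 / N
tau = 1.0 / 32                                   # tau >> h^2, still stable
x = np.arange(N + 1) * h
X1, X2 = np.meshgrid(x, x, indexing="ij")
y = np.sin(np.pi * X1) * np.sin(np.pi * X2)
for _ in range(10):
    y = adi_step(y, tau, h)
print(np.max(np.abs(y)))                         # stays bounded and decays
```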

Even when the difference problem is solved exactly, its solution may differ, not only quantitatively but also qualitatively, from the solution of the original differential problem. The discrepancy is particularly prominent for equations whose coefficients or solutions involve singularities (thus, in computing discontinuous solutions of the equations of gas dynamics one usually encounters zones in which the discontinuities are strongly "spread out"). Thus, direct approximation of a differential problem by a difference problem, in which the derivatives are replaced, with a considerable degree of arbitrariness, by difference quotients, does not always yield a good difference scheme. Various principles have been developed for the construction of difference schemes of good quality. Among the successful methods are the balance method and the method of variational-functional approximation (see [1]–[3]). The difference schemes obtained by these methods correctly reflect the integral conservation laws valid for the original equations, and ensure that the relevant difference operators have fixed signs. The theory of homogeneous difference schemes (see [7]) addresses the construction of difference schemes for equations with variable (including discontinuous) coefficients and the investigation of their convergence.

References

[1] A.N. Tikhonov, A.A. Samarskii, "Equations of mathematical physics", Pergamon (1963) (Translated from Russian) MR0165209 Zbl 0111.29008
[2] A.A. Samarskii, G.I. Marchuk, "The theory of difference schemes", Moscow (1983) (In Russian) MR1818323 MR0514784 MR0378439 MR0347102 MR0438725 MR0207234 MR0203968 Zbl 0971.65076 Zbl 1090.65500 Zbl 0368.65032 Zbl 0359.65083 Zbl 0155.43603
[3] G.I. Marchuk, "Methods of numerical mathematics", Springer (1982) (Translated from Russian) MR0661258 Zbl 0485.65003
[4] N.N. Yanenko, "The method of fractional steps: solution of problems of mathematical physics in several variables", Springer (1971) (Translated from Russian) MR0307493
[5] E.G. D'yakonov, "Difference methods for solving boundary value problems", 1–2, Moscow (1971–1972) (In Russian) Zbl 0434.65078 Zbl 0308.65051
[6] G.N. Polozhii, "Numerical solution of two- and three-dimensional boundary value problems of mathematical physics and functions of a discrete argument", Kiev (1962) (In Russian)
[7] A.N. Tikhonov, A.A. Samarskii, "On homogeneous difference schemes", Zh. Vychisl. Mat. i Mat. Fiz., 1 : 1 (1961) pp. 5–63 (In Russian) MR0168127 Zbl 0125.07303

Comments

In Western literature, boundary value problems are usually called initial-boundary value problems if the time variable is involved (e.g. problem (2) of this article). The literature on the numerical solution of partial differential equations is overwhelming; only a few good textbooks are listed below. References [a5] and [a8] deal only with initial-boundary value problems.

References

[a1] G.E. Forsythe, W.R. Wasow, "Finite difference methods for partial differential equations", Wiley (1960) MR0130124 Zbl 0099.11103
[a2] P.R. Garabedian, "Partial differential equations", Wiley (1964) MR0162045 Zbl 0124.30501
[a3] I. Gladwell (ed.), R. Wait (ed.), A survey of numerical methods for partial differential equations, Clarendon Press (1979) MR0569444 Zbl 0417.65047
[a4] W.F. Griffiths, "The finite difference method in partial differential equations", Wiley (1980) MR0562915 Zbl 0417.65048
[a5] R.D. Richtmyer, K.W. Morton, "Difference methods for initial value problems", Wiley (1967) MR0220455
[a6] A.A. Samarskii, "Theorie der Differenzverfahren", Akad. Verlagsgesell. Geest u. Portig K.-D. (1984) (Translated from Russian)
[a7] G.D. Smith, "Numerical solution of partial differential equations", Oxford Univ. Press (1977) MR0827497 MR0509636 MR0391467 MR1534099 MR0282538 Zbl 0576.65089 Zbl 0389.65040 Zbl 0123.11806
[a8] N.N. Yanenko, "The method of fractional steps; the solution of problems of mathematical physics in several variables", Springer (1971) (Translated from Russian) MR0307493 Zbl 0356.65109 Zbl 0209.47103 Zbl 0183.18201 Zbl 0099.33502
This article was adapted from an original article by A.V. Gulin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.