Non-linear boundary value problem, numerical methods
 
Methods replacing a boundary value problem by a discrete problem (see [[Linear boundary value problem, numerical methods|Linear boundary value problem, numerical methods]] and [[Non-linear equation, numerical methods|Non-linear equation, numerical methods]]). In many cases, especially in the discussion of boundary value problems for systems of ordinary differential equations, the description of numerical methods usually proceeds without indication of a discretization of the original problem, but such a discretization is nevertheless implicit and is realized relative to a known model.

Suppose, for example, that a two-point boundary value problem is to be discussed for a system of ordinary differential equations

$$ \tag{1}
y' = f(x, y) \quad \textrm{ for } \ 0 < x < X,
$$

$$ \tag{2}
B(y(0)) = 0, \quad D(y(X)) = 0,
$$

where

$$
y \equiv (y_1, \dots, y_l)^T, \quad f \equiv (f_1, \dots, f_l)^T,
$$

$$
B \equiv (B_1, \dots, B_{l-r})^T, \quad D \equiv (D_1, \dots, D_r)^T,
$$

$$
0 \leq r \leq l.
$$

Suppose that the vector $G(y) = (G_1(y), \dots, G_r(y))$ is such that the system of equations

$$ \tag{3}
B(y) = 0, \quad G(y) = g
$$

uniquely determines a vector $y = \alpha(g)$ for every vector $g \equiv (g_1, \dots, g_r)^T$; for example, it is often expedient to take

$$
G_1 \equiv y_{l-r+1}, \ \dots, \ G_r \equiv y_l.
$$

Then the problem (1)–(2) can be reduced to the system (operator equation)

$$ \tag{4}
\psi(g) = 0,
$$

where $\psi(g) = D(\omega(g))$ and $\omega(g)$ is the value at $x = X$ of the solution of (1) for the initial condition $y(0) = \alpha(g)$. To obtain the values of $\psi(g)$ one has to solve numerically (see [[#References|[1]]], [[#References|[2]]]) the corresponding system of differential equations, that is, to use a discretization of the original problem. To solve the system (4) one can use various iteration methods for the solution of non-linear equations. Having found a solution $g^{*}$ of (4) one can determine the vector $y^{*} = \alpha(g^{*})$ from the system (3) and, by solving (1) for the initial condition $y(0) = y^{*}$, obtain a solution of the problem (1)–(2). In a number of methods, apart from the values of $\psi$ one also uses the value of its derivative, to be found either by integrating the equation

$$ \tag{5}
\eta' = \frac{\partial f}{\partial y} \eta
$$

(the variational equation for (1)) or by means of some formula for numerical differentiation. This way of reducing a boundary value problem to (4) is called the shooting method; the same approach applies to a problem of the more general form

$$
y' = f(x, y, g) \quad \textrm{ for } \ 0 < x < X,
$$

$$
y(0) = \alpha(g), \quad D(y(X(g), g)) = 0,
$$

containing a certain vector parameter $g$ explicitly, in particular to an eigenvalue problem; here the function $\alpha(g)$ is assumed to be given. However, if the solution of (5) grows rapidly with $x$, then the error of the numerical integration leads to large errors in the values of $\psi(g)$ and its derivative, and ultimately to a large error in the resulting solution.

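As an illustration of the reduction just described, a minimal sketch of the shooting method in Python with SciPy is given below. The model problem $y_1' = y_2$, $y_2' = \frac{3}{2} y_1^2$ with $B(y(0)) \equiv y_1(0) - 4 = 0$, $D(y(X)) \equiv y_1(X) - 1 = 0$, $X = 1$, the choice $G_1 \equiv y_2$, and all names in the code are assumptions made for this example only.

<pre>
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import root

X = 1.0

def f(x, y):
    # right-hand side of (1) for the model problem: y1' = y2, y2' = 1.5*y1**2
    return [y[1], 1.5 * y[0] ** 2]

def alpha(g):
    # y(0) = alpha(g): B(y(0)) = y1(0) - 4 = 0 fixes the first component,
    # and G(y) = y2 = g fixes the remaining one (the choice G_1 = y_l above)
    return np.array([4.0, g[0]])

def psi(g):
    # psi(g) = D(omega(g)), omega(g) being the value at x = X of the solution
    # of (1) with the initial condition y(0) = alpha(g)
    sol = solve_ivp(f, (0.0, X), alpha(g), rtol=1e-10, atol=1e-12)
    omega = sol.y[:, -1]
    return [omega[0] - 1.0]          # D(y(X)) = y1(X) - 1

g_star = root(psi, x0=[-5.0]).x      # solve the operator equation (4)
y_star = alpha(g_star)               # recover y(0) = alpha(g*) from (3)
print("g* =", g_star, " y(0) =", y_star)
</pre>

The derivative of $\psi$ needed by the root-finder is approximated here by finite differences; integrating the variational equation (5) together with (1) is the alternative mentioned above.
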
  
 
In the method of linearization (see, for example, [[#References|[1]]], [[#References|[3]]]) approximations to a solution of the problem (1)–(2) are determined as solutions of a sequence of linear equations

$$ \tag{6}
y_{n+1}' = A_n(x) (y_{n+1} - y_n) + f(x, y_n) \quad \textrm{ for } \ 0 < x < X
$$

with the linear boundary conditions

$$ \tag{7}
\left .
\begin{array}{l}
B_n ( y_{n+1}(0) - y_n(0) ) + B(y_n(0)) = 0, \\
D_n ( y_{n+1}(X) - y_n(X) ) + D(y_n(X)) = 0,
\end{array}
\right \}
$$

where $A_n(x)$ is a square matrix of order $l$ consisting of functions, and $B_n$ and $D_n$ are numerical $((l-r) \times l)$- and $(r \times l)$-matrices, $n = 0, 1, \dots$. The initial approximation $y_0(x)$ is assumed to be known. In the case of the Newton–Kantorovich method (cf. [[Kantorovich process|Kantorovich process]]; [[Newton method|Newton method]]) the equations (6) and (7) take the form

$$ \tag{8}
y_{n+1}' = \frac{\partial f}{\partial y}(x, y_n) (y_{n+1} - y_n) + f(x, y_n(x)) \quad \textrm{ for } \ 0 < x < X,
$$

$$ \tag{9}
\left .
\begin{array}{l}
\frac{\partial B}{\partial y}(y_n(0)) ( y_{n+1}(0) - y_n(0) ) + B(y_n(0)) = 0, \\
\frac{\partial D}{\partial y}(y_n(X)) ( y_{n+1}(X) - y_n(X) ) + D(y_n(X)) = 0.
\end{array}
\right \}
$$

If $f(x, y)$, $B(y)$ and $D(y)$ are twice continuously differentiable in $y$, if the problems (8)–(9) are well-posed, and if $y_0(x)$ is sufficiently close to the required solution $y(x)$, then

$$
\| y_n - y \|_C \ \leq \ K^{-1} ( K \| y_0 - y \|_C )^{2^n},
$$

where $K > 0$ is a constant and $\| y \|_C \equiv \max_{0 \leq x \leq X} | y(x) |$ (see [[#References|[1]]]).

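A minimal sketch of the Newton–Kantorovich (quasilinearization) iteration in Python is given below; for brevity, instead of a first-order system, the scalar second-order model problem $u'' = e^u$, $u(0) = u(1) = 0$ is taken, and each linear problem of type (8) is solved by the standard three-point difference scheme. The model problem, the grid size and all names are assumptions made for this example only.

<pre>
import numpy as np

# Model problem: u'' = exp(u) on (0, 1), u(0) = u(1) = 0.
# Quasilinearization, cf. (8):
#   u_{n+1}'' - exp(u_n) * u_{n+1} = exp(u_n) - exp(u_n) * u_n,
# with zero boundary values; each linear problem is solved on a uniform grid.

M = 200                                   # number of interior grid points
h = 1.0 / (M + 1)

# three-point difference approximation of d^2/dx^2 with Dirichlet conditions
L = (np.diag(-2.0 * np.ones(M)) +
     np.diag(np.ones(M - 1), 1) +
     np.diag(np.ones(M - 1), -1)) / h ** 2

u = np.zeros(M)                           # initial approximation u_0
for n in range(20):
    w = np.exp(u)                         # here f(x, u_n) and f_u(x, u_n) coincide
    u_next = np.linalg.solve(L - np.diag(w), w - w * u)
    step = np.max(np.abs(u_next - u))
    u = u_next
    print(f"n = {n + 1}:  max|u_(n+1) - u_n| = {step:.2e}")
    if step < 1e-13:
        break
</pre>

Once the approximation is close to the solution, the printed increments decrease quadratically, in agreement with the estimate above.
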
  
 
The methods mentioned generalize to the more general case of conditions of the form

$$
A ( y(X_0), y(X_1), \dots, y(X_m) ) = 0,
$$

where

$$
A \equiv (A_1, \dots, A_l)^T, \quad 0 = X_0 < X_1 < \dots < X_m \equiv X.
$$

Many important theoretical and applied problems lead to the need of solving non-linear boundary value problems (and related problems) for equations and systems of equations of elliptic type (see, for example, [[#References|[4]]]–[[#References|[8]]]). For such a class of problems the basic numerical methods are projection methods (projection-grid, variational-difference, finite element) and difference methods (see [[#References|[7]]]–[[#References|[17]]]). Their constructions are in many respects analogous to those in the corresponding methods for linear boundary value problems (see [[Laplace equation, numerical methods|Laplace equation, numerical methods]]; [[Poisson equation, numerical methods|Poisson equation, numerical methods]]). However, in these methods the resulting discrete (grid) analogues to boundary value problems, being systems of non-linear equations

$$ \tag{10}
L_N ( \overline{u}_N ) = \overline{f}_N,
$$

where

$$
\overline{u}_N \equiv (u_1, \dots, u_N)^T \quad \textrm{ and } \quad \overline{f}_N \equiv (f_1, \dots, f_N)^T,
$$

often involve considerable additional difficulties, both with respect to the analysis of the systems themselves and of the proximity, in one sense or another, of their solution to that of the original problem, and with respect to the amount of numerical work needed for specifying the systems (10) and for the search for their solutions. The specific nature of the non-linearity of a boundary value problem forces one to pay special attention to the choice of Banach spaces and their grid analogues in which the problem itself and its finite-dimensional approximations lend themselves to analysis (see, for example, [[#References|[5]]]–[[#References|[12]]], [[#References|[15]]]). But the most detailed study, especially of algorithms, has been made only of numerical methods for classes of non-linear problems related in a certain sense to linear problems (weakly non-linear equations, equations with restricted non-linearity), thus making it possible to use the theory of Hilbert spaces and their finite-dimensional analogues (Euclidean spaces).

Let $H_N \equiv H$ be a Euclidean space of vector-valued functions $\overline{u}_N \equiv u$ with scalar product $(u, v) \equiv \sum_{i=1}^{N} u_i v_i$, and let $l(H)$ be the set of self-adjoint positive-definite linear operators mapping $H$ to $H$; alongside $H$ one also uses the Euclidean space $H_B$, $B \in l(H)$, that differs from $H$ only in the form of the scalar product:

$$
(u, v)_{H_B} \equiv (u, v)_B \equiv (Bu, v), \quad \| u \|_B \equiv (Bu, u)^{1/2}.
$$

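In concrete terms, a minimal sketch of this scalar product and norm in Python is given below; the particular matrix $B$ (a one-dimensional discrete Laplacian) is only an illustrative assumption.

<pre>
import numpy as np

def b_inner(B, u, v):
    # scalar product (u, v)_B = (B u, v) of the space H_B
    return np.dot(B @ u, v)

def b_norm(B, u):
    # norm ||u||_B = (B u, u)^(1/2)
    return np.sqrt(b_inner(B, u, u))

# B as a self-adjoint positive-definite operator from l(H): here the
# one-dimensional discrete Laplacian with Dirichlet boundary conditions,
# a typical choice making H_B a grid analogue of a Sobolev space
N = 6
B = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
u = np.ones(N)
print(b_norm(B, u))
</pre>
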
  
 
Next, let

$$
S_B(r) \equiv \{ u : \| u \|_B \leq r \}, \quad r > 0.
$$

If the operator $L_N$ is continuous on $S_B(r)$ and such that for any $u$ with $\| u \|_B = r$,

$$ \tag{11}
( L_N(u) - \overline{f}_N, u ) \geq 0,
$$

then the system (10) has at least one solution in $S_B(r)$. This assertion (see, for example, [[#References|[9]]]) is one of the consequences of the classical existence theorems for solutions of non-linear operator equations based on topological principles (see, for example, [[#References|[14]]]) and on the monotonicity of the operators (see, for example, [[#References|[6]]], [[#References|[11]]], [[#References|[14]]]). In particular, if $L_N$ is continuous everywhere and if

$$ \tag{12}
( L_N(u), u ) \geq \delta \| u \|_B^2, \quad \delta > 0,
$$

for any $u$, then the system (10) always has a solution, and any one of them belongs to $S_B(r)$ with $r = \delta^{-1} \| \overline{f}_N \|_{B^{-1}}$. Moreover, if for any $u, v, w$ the inequalities

$$ \tag{13}
( L_N(u) - L_N(v), u - v ) \geq \delta_0 \| u - v \|_B^2, \quad \delta_0 > 0,
$$

$$ \tag{14}
| ( L_N(u) - L_N(v), w ) | \leq \delta_1 \| u - v \|_B \| w \|_B
$$

hold, which express the strong monotonicity and Lipschitz continuity of $L_N$, then (10) has a unique solution, and it can be found by the iteration method

$$ \tag{15}
A u^{n+1} = A u^n - \gamma ( L_N(u^n) - \overline{f}_N ),
$$

where $A \in l(H)$ and the iteration parameter $\gamma > 0$ is determined by constants like $\delta_0$ and $\delta_1$, which are obtained when in (13) and (14) the operator $B$ is replaced by $A$ (see, for example, [[#References|[8]]], [[#References|[9]]]). Hence, for difference methods, error estimates follow from the well-posedness of the system (10) together with the availability of an approximation of the original boundary value problem by a difference boundary value problem. It is frequently expedient to choose the operator $B$ so that the space $H_B$ becomes a discrete (grid) analogue of the corresponding Sobolev space (for example, $W_2^1(\Omega)$ for second-order equations), and so it becomes possible to use either the imbedding theorems themselves (in the case of projection approximations) or their grid analogues (see, for example, [[#References|[5]]]–[[#References|[9]]]) for the verification of inequalities like (12)–(14). Projection methods and, in particular, projection-grid (finite-element) methods have the merit that (12)–(14) follow from the corresponding inequalities for differential operators, and the task of estimating the error of the method easily reduces to that of approximating the solutions of the original problem by elements of a selected finite-dimensional subspace (see [[#References|[6]]]–[[#References|[9]]], [[#References|[11]]]). Instead of the validity of (13)–(14) in the whole space it suffices that they hold, for example, in some ball in $H_B$ surrounding the required solution. Sometimes useful inequalities are obtained from (12)–(14) when the scalar products on the left-hand sides are replaced by scalar products in $H_B$ and the norm $\| z \|_B$ on the right-hand sides by $\| B z \|$. In a number of problems, on the basis of a priori estimates of their solutions, one can replace them by equivalent boundary value problems for which inequalities like (12)–(14) are satisfied (see [[#References|[9]]]).

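A minimal sketch of the iteration (15) in Python for a one-dimensional model case is given below: the semi-linear grid problem $L_N(u) = Bu + u^3 = \overline{f}_N$ with $B$ a discrete Laplacian and $A = B$; the model problem, the value of $\gamma$ and all names are assumptions made for this example only.

<pre>
import numpy as np

M = 100                                   # number of interior grid points
h = 1.0 / (M + 1)

# B: one-dimensional discrete Laplacian with Dirichlet conditions,
# a self-adjoint positive-definite operator from l(H)
B = (np.diag(2.0 * np.ones(M)) -
     np.diag(np.ones(M - 1), 1) -
     np.diag(np.ones(M - 1), -1)) / h ** 2

def L_N(u):
    # strongly monotone grid operator: (L_N(u) - L_N(v), u - v) >= ||u - v||_B^2
    return B @ u + u ** 3

f_N = np.ones(M)                          # right-hand side
gamma = 0.5                               # iteration parameter
u = np.zeros(M)                           # initial approximation u^0

for n in range(200):
    residual = L_N(u) - f_N
    if np.max(np.abs(residual)) < 1e-10:
        break
    # one step of (15) with A = B:  B u^{n+1} = B u^n - gamma (L_N(u^n) - f_N)
    u = u - gamma * np.linalg.solve(B, residual)

print("iterations:", n, " final residual:", np.max(np.abs(L_N(u) - f_N)))
</pre>

Since $\delta_0$ and $\delta_1$ are close to one for this model operator, a fixed $\gamma$ already gives a number of iterations that does not grow with $N$, in line with the estimates discussed next.
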
  
When (13) and (14) hold, the iteration method (15) with $A = B$ converges for any initial approximation, and if the constants $\delta_0$ and $\delta_1$ are independent of $N$, then one can obtain a solution of (10) to within an accuracy $\epsilon$ in $O(| \log \epsilon |)$ iterations (usually $\epsilon \approx N^{\alpha}$, $\alpha < 0$, that is, $0 < c_0 \leq \epsilon N^{-\alpha} \leq c_1$). By a successful construction of the grid method one sometimes manages to choose the operator $B$ so that the solution of the system (10) to within $N^{\alpha}$ by means of an iteration method like (15) is achieved by performing only $O(N \log^2 N)$ or even $O(N \log N)$ arithmetic operations (see [[Minimization of the labour of calculation|Minimization of the labour of calculation]]). The use of a sequence of refining grids makes it possible to reduce these estimates to $O(N \log N)$ and $O(N)$ operations and so to obtain asymptotically-optimal numerical methods (relative to the labour involved) for the solution of certain elliptic boundary value problems. Here it is assumed that the evaluation of the residual $L_N(u^n) - \overline{f}_N$ itself requires at most $KN$ arithmetic operations. Nevertheless, for certain problems (with large $K$) it is expedient to reduce the frequency of such computations. This can be achieved provided one can find a more accurate initial approximation by the use of some linearization of $L_N$ and by methods of the type of continuation with respect to a parameter (see [[Non-linear equation, numerical methods|Non-linear equation, numerical methods]]). Most often, [[Linearization methods|linearization methods]] in conjunction with methods of continuation for non-linear operators with respect to a parameter (cf. [[Continuation method (to a parametrized family, for non-linear operators)|Continuation method (to a parametrized family, for non-linear operators)]]) are used in the most difficult situations, for example, when (13) fails to hold. Certain eigenvalue problems (see, for example, [[#References|[9]]], [[#References|[12]]], [[#References|[14]]]) can also be considered as non-linear boundary value problems of elliptic type; the construction of the relevant numerical methods is in many respects analogous to the ones already mentioned.

Some other stationary boundary value problems can be treated numerically in the same manner as well (cf. [[#References|[8]]], [[#References|[9]]], [[#References|[17]]]). Non-stationary non-linear boundary value problems, to which numerical methods are likewise applied, include problems for equations and systems of parabolic, hyperbolic, mixed, and other types; discrete analogues of elliptic operators similar to the ones considered above are used for them as well, but here it is even more important to have approximation methods for the individual time layers.

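For non-stationary problems, a minimal sketch of the approximation on individual time layers is given below: the backward Euler scheme for the model parabolic problem $u_t = u_{xx} - u^3$ with zero boundary values, where the non-linear grid problem arising on each layer is solved by Newton's method. The model problem, the time step and all names are assumptions made for this example only.

<pre>
import numpy as np

M, tau, steps = 100, 0.01, 20             # grid points, time step, number of layers
h = 1.0 / (M + 1)
x = np.linspace(h, 1.0 - h, M)

# discrete Laplacian with Dirichlet boundary conditions
Lap = (np.diag(-2.0 * np.ones(M)) +
       np.diag(np.ones(M - 1), 1) +
       np.diag(np.ones(M - 1), -1)) / h ** 2

u = np.sin(np.pi * x)                     # initial condition (an assumption)
I = np.eye(M)
for k in range(steps):
    v = u.copy()
    for _ in range(10):                   # Newton's method for the time layer:
        F = (v - u) / tau - Lap @ v + v ** 3        # (v - u)/tau = Lap v - v^3
        J = I / tau - Lap + np.diag(3.0 * v ** 2)
        dv = np.linalg.solve(J, -F)
        v = v + dv
        if np.max(np.abs(dv)) < 1e-12:
            break
    u = v

print("max of the solution at t =", steps * tau, ":", np.max(u))
</pre>
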
 
====References====

[1] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations", MIR (1977) (Translated from Russian)
[2] G. Sewell, "The numerical solution of ordinary and partial differential equations", Acad. Press (1988)
[3] R.E. Bellman, R.E. Kalaba, "Quasilinearization and nonlinear boundary-value problems", Elsevier (1965) (Translated from Russian)
[4] A.A. Berezovskii, "Lectures on the non-linear boundary value problems of mathematical physics", '''1''', Kiev (1976) (In Russian)
[5] O.A. Ladyzhenskaya, N.N. Ural'tseva, "Linear and quasilinear elliptic equations", Acad. Press (1968) (Translated from Russian)
[6] J.-L. Lions, "Quelques méthodes de résolution des problèmes aux limites non linéaires", Dunod (1969)
[7] R. Glowinski, J.-L. Lions, R. Trémolières, "Numerical analysis of variational inequalities", North-Holland (1981) (Translated from French)
[8] R. Glowinski, "Numerical methods for nonlinear variational problems", Springer (1984)
[9] E.G. D'yakonov, "Minimization of computational work. Asymptotically-optimal algorithms", Moscow (1989) (In Russian)
[10] M.M. Karchevskii, A.D. Lyashko, "Difference schemes for non-linear problems of mathematical physics", Kazan' (1976) (In Russian)
[11] R.S. Varga, "Functional analysis and approximation theory in numerical analysis", ''Reg. Conf. Ser. Appl. Math.'', '''3''', SIAM (1971)
[12] G. Strang, J. Fix, "An analysis of the finite element method", Prentice-Hall (1973)
[13] J.T. Oden, "Finite elements of nonlinear continua", McGraw-Hill (1972)
[14] M.A. Krasnosel'skii, G.M. Vainikko, P.P. Zabreiko, et al., "Approximate solution of operator equations", Wolters-Noordhoff (1972) (Translated from Russian)
[15] F. Brezzi, J. Rappaz, P. Raviart, "Finite dimensional approximation of nonlinear problems III", ''Numer. Math.'', '''38''' (1981), pp. 1–30
[16] A.A. Amosov, N.S. Bakhvalov, Yu.I. Osipik, "Iterative processes for the problem of stationary heat exchange in a system of absolutely black bodies", ''J. Comput. Math. Math. Phys.'', '''20''' : 1 (1980), pp. 110–119; ''Zh. Vychisl. Mat. Mat. Fiz.'', '''20''' : 1 (1980), pp. 104–111
[17] F. Thomasset, "Implementation of finite element methods for Navier–Stokes equations", Springer (1981)

====Comments====

====References====

[a1] U.M. Ascher, "Numerical solution of boundary value problems for ordinary differential equations", Prentice-Hall (1988)
Revision as of 08:03, 6 June 2020


Methods replacing a boundary value problem by a discrete problem (see Linear boundary value problem, numerical methods and Non-linear equation, numerical methods). In many cases, especially in the discussion of boundary value problems for systems of ordinary differential equations, the description of numerical methods usually proceeds without indication of a discretization of the original problem, but such a discretization is nevertheless implicit and is realized relative to a known model.

Suppose, for example, that a two-point boundary value problem is to be discussed for a system of ordinary differential equations

$$ \tag{1 } y ^ \prime = f ( x , y ) \ \ \textrm{ for } 0 < x < X , $$

$$ \tag{2 } B ( y ( 0) ) = 0 ,\ D ( y ( X) ) = 0 , $$

where

$$ y \equiv ( y _ {1} \dots y _ {l} ) ^ {T} ,\ \ f \equiv ( f _ {1} \dots f _ {l} ) ^ {T} , $$

$$ B = ( B _ {1} \dots B _ {l-} r ) ^ {T} ,\ D \equiv ( D _ {1} \dots D _ {r} ) ^ {T} , $$

$$ 0 \leq r \leq l . $$

Suppose that the vector $ G ( y) = ( G _ {1} ( y) \dots G _ {r} ( y) ) $ is such that the system of equations

$$ \tag{3 } B ( y) = 0 ,\ \ G ( y) = g $$

uniquely determines a vector $ y = \alpha ( g) $ for every vector $ g \equiv ( g _ {1} \dots g _ {r} ) ^ {T} $; for example, it is often expedient to take

$$ G _ {1} \equiv y _ {l - r + 1 } \dots G _ {r} \equiv y _ {l} . $$

Then the problem (1)–(2) can be reduced to the system (operator equation)

$$ \tag{4 } \psi ( g) = 0 , $$

where $ \psi ( g) = D ( \omega ( g) ) $ and $ \omega ( g) $ is the value at $ x = X $ of the solution of (1) for the initial condition $ y ( 0) = \alpha ( g) $. To obtain the values of $ \psi ( g) $ one has to solve numerically (see [1], [2]) the corresponding system of differential equations, that is, to use a discretization of the original problem. To solve the system (4) one can use various iteration methods for the solution of non-linear equations. Having found a solution $ g ^ {*} $ of (4) one can determine the vector $ y ^ {*} = \alpha ( g ^ {*} ) $ from the system (3) and, by solving (1) for the initial condition $ y ( 0) = y ^ {*} $, obtain a solution of the problem (1)–(2). In a number of methods, apart from the values of $ \psi $ one also uses the value of its derivative, to be found either by integrating the equation

$$ \tag{5 } \eta ^ \prime = \ \frac{\partial f }{\partial y } \eta $$

(the variational equation for (1)) or by means of some formula for numerical differentiation. This way of reducing a boundary value problem to (4) is called the shooting method, as applied to a problem of more general form

$$ y ^ \prime = f ( x , y , g ) \ \ \textrm{ for } 0 < x < X , $$

$$ y ( 0) = \alpha ( g) ,\ D ( y ( X ( g) , g )) = 0, $$

containing a certain vector parameter $ g $ explicitly, in particular, to an eigen value problem; here the function $ \alpha ( g) $ is assumed to be given. However, if the solution of (5) grows rapidly with $ x $, then the error of the numerical integration leads to large errors in the values of $ \psi ( g) $ and its derivative and ultimately to a large error of the resulting solution.

In the method of linearization (see, for example, [1], [3]) approximations to a solution of the problem (1)–(2) are determined as solutions of a sequence of linear equations

$$ \tag{6 } y _ {n+} 1 ^ \prime = \ A _ {n} ( x) ( y _ {n+} 1 - y _ {n} ) + f ( x , y _ {n} ) \ \ \textrm{ for } 0 < x < X $$

with the linear boundary conditions

$$ \tag{7 } \left . where $ A _ {n} ( x) $ is a square matrix of order $ l $ consisting of functions and $ B _ {n} $ and $ D _ {n} $ are numerical $ (( l- r ) \times n ) $- and $ ( r \times l ) $- matrices, $ n = 0 , 1 , . . . $. The initial approximation $ y _ {0} ( x) $ is assumed to be known. In the case of the Newton–Kantorovich method (cf. [[Kantorovich process|Kantorovich process]]; [[Newton method|Newton method]]) the equations (6) and (7) take the form $$ \tag{8 } y _ {n+} 1 ^ \prime = \

\frac{\partial f }{\partial y }

( x , y _ {n} ) ( y _ {n+} 1 - y _ {n} )

+ f ( x , y _ {n} ( x) ) \ \ \textrm{ for } 0 < x < X , $$ $$ \tag{9 } \left .

If $ f ( x , y ) $, $ B ( y) $ and $ D ( y) $ are twice continuously differentiable in $ y $, if the problems (8)–(9) are well-posed, and if $ y _ {0} ( x) $ is sufficiently close to the required solution $ y ( x) $, then

$$ \| y _ {n} - y \| _ {C} \leq K ^ {-} 1 ( K \| y _ {0} - y \| _ {C} ) ^ {2n} , $$

where $ K > 0 $ is a constant and $ \| y \| _ {C} \equiv \max _ {0 \leq x \leq X } | y ( x) | $( see [1]).

The methods mentioned generalize to the more general case of conditions of the form

$$ A ( y ( X _ {0} ) , y ( X _ {1} ) \dots y ( X _ {m} ) ) = 0 , $$

where

$$ A \equiv ( A _ {1} \dots A _ {l} ) ^ {T} ,\ \ 0 = X _ {0} < X _ {1} < \dots < X _ {m} \equiv X . $$

Many important theoretical and applied problems lead to the need of solving non-linear boundary value problems (and related problems) for equations and systems of equations of elliptic type (see, for example, [4][8]). For such a class of problems the basic numerical methods are projection methods (projection-grid, variational-difference, finite element) and difference methods (see [7][17]). Their constructions are in many respects analogous to those in the corresponding methods for linear boundary value problems (see Laplace equation, numerical methods; Poisson equation, numerical methods). However, in these methods the resulting discrete (grid) analogues to boundary value problems, being systems of non-linear equations

$$ \tag{10 } L _ {N} ( \overline{u}\; _ {N} ) = \overline{f}\; _ {N} , $$

where

$$ \overline{u}\; _ {N} \equiv ( u _ {1} \dots u _ {N} ) ^ {T} \ \ \textrm{ and } \ \ \overline{f}\; _ {N} \equiv ( f _ {1} \dots f _ {N} ) ^ {T} , $$

often involve considerable additional difficulties both with respect to the analysis of the systems themselves and the proximity in one sense or another of their solution to that of the original problem, and with respect to the input of numerical work for specifying the systems (10) and the search for their solutions. The specific nature of the non-linearity of a boundary value problem forces one to pay special attention to the choice of Banach spaces and their grid analogues in which the problem itself and its finite-dimensional approximations lend themselves to analysis (see, for example, [5][12], [15]). But the most detailed study, especially of algorithms, has been made only of numerical methods for classes of non-linear problems related in a certain sense to linear problems (weakly non-linear equations, equations with restricted non-linearity), thus making it possible to use the theory of Hilbert spaces and their finite-dimensional analogues (Euclidean spaces).

Let $ H _ {N} \equiv H $ be a Euclidean space of vector-valued functions $ \overline{u}\; _ {N} \equiv u $ with scalar product $ ( u , v ) \equiv \sum _ {i=1} ^ {N} u _ {i} v _ {i} $, and let $ l ( H) $ be the set of self-adjoint positive-definite linear operators mapping $ H $ to $ H $; along with $ H $ one also uses the Euclidean space $ H _ {B} $, $ B \in l ( H) $, that differs from $ H $ only in the form of the scalar product:

$$ ( u , v ) _ {H _ {B} } \equiv \ ( u , v) _ {B} \equiv \ ( B u , v ) ,\ \ \| u \| _ {B} \equiv \ ( B u , u ) ^ {1/2} . $$

Next, let

$$ S _ {B} ( r) \equiv \ \{ {u } : {\| u \| _ {B} \leq r } \} ,\ \ r > 0 . $$
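In computations the space $ H _ {B} $ corresponds to ordinary operations with a symmetric positive-definite matrix; a minimal sketch of the scalar product, norm and ball defined above (the helper names are illustrative):

```python
import numpy as np

def inner_B(B, u, v):
    """Scalar product (u, v)_B = (B u, v) for a symmetric positive-definite matrix B."""
    return np.dot(B @ u, v)

def norm_B(B, u):
    """Norm ||u||_B = (B u, u)^(1/2)."""
    return np.sqrt(inner_B(B, u, u))

def in_S_B(B, u, r):
    """Membership in the ball S_B(r) = { u : ||u||_B <= r }."""
    return norm_B(B, u) <= r
```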

If the operator $ L _ {N} $ is continuous on $ S _ {B} ( r) $ and such that for any $ u $ with $ \| u \| _ {B} = r $,

$$ \tag{11 } ( L _ {N} ( u) - \overline{f}\; _ {N} , u ) \geq 0 , $$

then the system (10) has at least one solution in $ S _ {B} ( r) $. This assertion (see, for example, [9]) is one of the consequences of the classical existence theorems for solutions of non-linear operator equations based on topological principles (see, for example, [14]) and on the monotonicity of the operators (see, for example, [6], [11], [14]). In particular, if $ L _ {N} $ is continuous everywhere and if

$$ \tag{12 } ( L _ {N} ( u) , u ) \geq \ \delta \| u \| _ {B} ^ {2} ,\ \ \delta > 0 , $$

for any $ u $, then the system (10) always has a solution, and every solution belongs to $ S _ {B} ( r) $ with $ r = \delta ^ {-1} \| \overline{f}\; _ {N} \| _ {B ^ {-1} } $ (this follows from (12) and the Cauchy–Schwarz inequality in $ H _ {B} $, since $ ( \overline{f}\; _ {N} , u ) \leq \| \overline{f}\; _ {N} \| _ {B ^ {-1} } \| u \| _ {B} $). Moreover, if for any $ u , v , w $ the inequalities

$$ \tag{13 } ( L _ {N} ( u) - L _ {N} ( v) , u- v ) \geq \ \delta _ {0} \| u - v \| _ {B} ^ {2} ,\ \ \delta _ {0} > 0, $$

$$ \tag{14 } | ( L _ {N} ( u) - L _ {N} ( v) , w ) | \leq \ \delta _ {1} \| u - v \| _ {B} \| w \| _ {B} $$

hold (these express the strong monotonicity and Lipschitz continuity of $ L _ {N} $), then (10) has a unique solution, which can be found by the iteration method

$$ \tag{15 } A u ^ {n+1} = \ A u ^ {n} - \gamma ( L _ {N} ( u ^ {n} ) - \overline{f}\; _ {N} ) , $$

where $ A \in l ( H) $ and the iteration parameter $ \gamma > 0 $ is determined by constants like $ \delta _ {0} $ and $ \delta _ {1} $, which are obtained when in (13) and (14) the operator $ B $ is replaced by $ A $ (see, for example, [8], [9]). Hence, for difference methods, error estimates follow from the well-posedness of the system (10) and from the approximation of the original boundary value problem by the difference boundary value problem. It is frequently expedient to choose the operator $ B $ so that the space $ H _ {B} $ becomes a discrete (grid) analogue of the corresponding Sobolev space (for example, $ W _ {2} ^ {1} ( \Omega ) $ for second-order equations), since then either the imbedding theorems themselves (in the case of projection approximations) or their grid analogues (see, for example, [5]–[9]) can be used for the verification of inequalities like (12)–(14). Projection methods and, in particular, projection-grid (finite-element) methods have the merit that (12)–(14) follow from the corresponding inequalities for differential operators, and the task of estimating the error of the method easily reduces to that of approximating the solutions of the original problem by elements of a selected finite-dimensional subspace (see [6]–[9], [11]). Instead of the validity of (13)–(14) in the whole space it suffices that they hold, for example, in some ball in $ H _ {B} $ containing the required solution. Sometimes useful inequalities are obtained from (12)–(14) when the scalar products on the left-hand sides are replaced by scalar products in $ H _ {B} $ and the norm $ \| z \| _ {B} $ on the right-hand sides by $ \| B z \| $. In a number of problems, on the basis of a priori estimates of their solutions, one can replace them by equivalent boundary value problems for which inequalities like (12)–(14) are satisfied (see [9]).
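As an illustration, the following sketch implements the iteration (15) with $ A = B $ for the model discrete problem $ L _ {N} ( u ) \equiv B u + u ^ {3} = \overline{f}\; _ {N} $ (the cube understood componentwise), where $ B $ is the standard three-point difference analogue of $ - d ^ {2} / d x ^ {2} $ on $ ( 0 , 1 ) $ with zero boundary values; for this operator (13) holds with $ \delta _ {0} = 1 $. The concrete problem, the grid and the value of $ \gamma $ are illustrative assumptions; in general $ \gamma $ should be chosen from the constants $ \delta _ {0} $ and $ \delta _ {1} $ as described above.

```python
import numpy as np

def iterate_15(f_N, B, gamma=0.5, tol=1e-10, max_iter=500):
    """Preconditioned simple iteration, cf. (15) with A = B, for the model
    discrete problem L_N(u) = B u + u**3 = f_N."""
    u = np.zeros_like(f_N)                    # initial approximation u^0
    for _ in range(max_iter):
        residual = B @ u + u**3 - f_N         # L_N(u^n) - f_N
        # One step of (15):  B u^{n+1} = B u^n - gamma * (L_N(u^n) - f_N)
        u_new = u - gamma * np.linalg.solve(B, residual)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

# Illustrative set-up: B = (1/h^2) * tridiag(-1, 2, -1) on (0, 1), f(x) = 1.
N = 100
h = 1.0 / N
B = (np.diag(2.0 * np.ones(N - 1)) +
     np.diag(-np.ones(N - 2), 1) +
     np.diag(-np.ones(N - 2), -1)) / h**2
f_N = np.ones(N - 1)
u = iterate_15(f_N, B)
```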

When (13) and (14) hold, the iterative method (15) with $ A = B $ converges for any initial approximation, and if the constants $ \delta _ {0} $ and $ \delta _ {1} $ are independent of $ N $, then one can obtain a solution of (10) with accuracy $ \epsilon $ in $ O ( | \mathop{\rm log} \epsilon | ) $ iterations (usually $ \epsilon \approx N ^ \alpha $, $ \alpha < 0 $, that is, $ 0 < c _ {0} \leq \epsilon N ^ {- \alpha } \leq c _ {1} $). With a successful construction of the grid method one can sometimes choose the operator $ B $ so that the solution of the system (10) to within $ N ^ \alpha $ by means of an iteration method like (15) is achieved by performing only $ O ( N \mathop{\rm log} ^ {2} N ) $ or even $ O ( N \mathop{\rm log} N ) $ arithmetic operations (see Minimization of the labour of calculation). The use of a sequence of refining grids makes it possible to reduce these estimates to $ O ( N \mathop{\rm log} N ) $ and $ O ( N) $ operations and so to obtain asymptotically-optimal numerical methods (relative to the labour involved) for the solution of certain elliptic boundary value problems. Here it is assumed that the computation of the residual $ L _ {N} ( u ^ {n} ) - \overline{f}\; _ {N} $ itself requires at most $ K N $ operations. Nevertheless, for certain problems (with large $ K $) it is expedient to reduce the frequency of such computations. This can be achieved provided one can find a more accurate initial approximation by the use of some linearization of $ L _ {N} $ and by methods of the type of continuation with respect to a parameter (see Non-linear equation, numerical methods). Most often, linearization methods in conjunction with methods of continuation for non-linear operators with respect to a parameter (cf. Continuation method (to a parametrized family, for non-linear operators)) are used in the most difficult situations, for example, when (13) fails to hold. Certain eigenvalue problems (see, for example, [9], [12], [14]) can also be considered as non-linear boundary value problems of elliptic type; the construction of the relevant numerical methods is in many respects analogous to the ones already mentioned.

Some other stationary boundary value problems can be treated numerically in the same manner as well (cf. [8], [9], [17]). Non-stationary non-linear boundary value problems include problems for equations and systems of parabolic, hyperbolic, mixed, and other types; in their numerical treatment discrete analogues of elliptic operators similar to those considered above are also used, but for such problems it is even more important to have approximation methods for the individual time layers.

References

[1] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian)
[2] G. Sewell, "The numerical solution of ordinary and partial differential equations" , Acad. Press (1988)
[3] R.E. Bellman, R.E. Kalaba, "Quasilinearization and nonlinear boundary-value problems" , Elsevier (1965)
[4] A.A. Berezovskii, "Lectures on the non-linear boundary value problems of mathematical physics" , 1 , Kiev (1976) (In Russian)
[5] O.A. Ladyzhenskaya, N.N. Ural'tseva, "Linear and quasilinear elliptic equations" , Acad. Press (1968) (Translated from Russian)
[6] J.-L. Lions, "Quelques méthodes de résolution des problèmes aux limites non linéaires" , Dunod (1969)
[7] R. Glowinski, J.-L. Lions, R. Trémolières, "Numerical analysis of variational inequalities" , North-Holland (1981) (Translated from French)
[8] R. Glowinski, "Numerical methods for nonlinear variational problems" , Springer (1984)
[9] E.G. D'yakonov, "Minimization of computational work. Asymptotically-optimal algorithms" , Moscow (1989) (In Russian)
[10] M.M. Karchevskii, A.D. Lyashko, "Difference schemes for non-linear problems of mathematical physics" , Kazan' (1976) (In Russian)
[11] R.S. Varga, "Functional analysis and approximation theory in numerical analysis" , Reg. Conf. Ser. Appl. Math. , 3 , SIAM (1971)
[12] G. Strang, G. Fix, "An analysis of the finite element method" , Prentice-Hall (1973)
[13] J.T. Oden, "Finite elements of nonlinear continua" , McGraw-Hill (1972)
[14] M.A. Krasnosel'skii, G.M. Vainikko, P.P. Zabreiko, et al., "Approximate solution of operator equations" , Wolters-Noordhoff (1972) (Translated from Russian)
[15] F. Brezzi, J. Rappaz, P.A. Raviart, "Finite dimensional approximation of nonlinear problems III" Numer. Math. , 38 (1981) pp. 1–30
[16] A.A. Amosov, N.S. Bakhvalov, Yu.I. Osipik, "Iterative processes for the problem of stationary heat exchange in a system of absolutely black bodies" J. Comput. Math. Math. Phys. , 20 : 1 (1980) pp. 110–119; Zh. Vychisl. Mat. Mat. Fiz. , 20 : 1 (1980) pp. 104–111
[17] F. Thomasset, "Implementation of finite element methods for Navier–Stokes equations" , Springer (1981)

Comments

References

[a1] U.M. Ascher, R.M.M. Mattheij, R.D. Russell, "Numerical solution of boundary value problems for ordinary differential equations" , Prentice-Hall (1988)
How to Cite This Entry:
Non-linear boundary value problem, numerical methods. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Non-linear_boundary_value_problem,_numerical_methods&oldid=12079
This article was adapted from an original article by E.G. D'yakonov, A.F. Shapkin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article