Non-linear equation, numerical methods

From Encyclopedia of Mathematics

{{TEX|done}}
 
Iteration methods for the solution of non-linear equations.
By a non-linear equation one means (see [[#References|[1]]]–[[#References|[3]]]) an algebraic or transcendental equation of the form

$$ \tag{1 }
\phi ( x)  = 0 ,
$$
where  $  x $  is a real variable and  $  \phi ( x) $  a non-linear function, and by a system of non-linear equations a system of the form

$$ \tag{2 }
\left .
\begin{array}{c}
\phi _ {1} ( x _ {1} \dots x _ {N} )  = 0 ,  \\
{\dots \dots \dots }  \\
\phi _ {N} ( x _ {1} \dots x _ {N} )  = 0 , \\
\end{array}
\right \}
$$
  
that is not linear; the solutions of (2) are  $  N $-dimensional vectors  $  x = ( x _ {1} \dots x _ {N} ) $.  Equation (1) and system (2) can be treated as a non-linear operator equation:

$$ \tag{3 }
L ( u)  =  f
$$
  
with a non-linear operator  $  L $  acting from a finite-dimensional vector subspace  $  H _ {N} $  into  $  H _ {N} $.
  
Numerical methods for the solution of a non-linear equation (3) are called iteration methods if they are defined by the transition from a known approximation  $  u  ^ {n} $  at the  $  n $-th iteration to a new iteration  $  u  ^ {n+1} $  and allow one to find in a sufficiently large number of iterations a solution of (3) within prescribed accuracy  $  \epsilon $.  The most important iteration methods for the approximate solution of (3), both of a general and of a special form, characteristic for discrete (grid) methods for solving boundary value problems for equations and systems of partial differential equations of strongly-elliptic type, are the object of study of the present article. Non-linear operator equations connected with the discussion of infinite-dimensional spaces (see, for example, [[#References|[4]]]–[[#References|[8]]]) are a very broad mathematical concept, including as special cases, for example, non-linear integral equations and non-linear boundary value problems. Numerical methods for their approximate solution also include methods for their approximation by finite-dimensional equations; these methods are treated separately.
  
 
One of the most important methods for solving an equation (3) is the simple iteration method (successive substitution), which assumes that one can replace (3) by an equivalent system

$$ \tag{4 }
u  = P ( u) ,
$$
  
where  $  u = ( u _ {1} \dots u _ {N} ) $  is an element of a finite-dimensional normed space  $  H _ {N} $  and  $  P $,  mapping  $  H _ {N} $  into  $  H _ {N} $,  is a contractive operator:

$$ \tag{5 }
\exists q < 1 \
\forall u , v \
\| P ( u) - P ( v) \|  \leq  \
q  \| u - v \| .
$$
  
 
Then by a general contractive-mapping principle (see [[#References|[1]]]–[[#References|[4]]] and [[Contracting-mapping principle|Contracting-mapping principle]]) equation (4) has a unique solution, the method of simple iteration

$$ \tag{6 }
u  ^ {n+1}  = P ( u  ^ {n} ) ,\ \
n = 0 , 1 \dots
$$

converges for any initial approximation  $  u  ^ {0} $,  and the error  $  z  ^ {n} $  at the  $  n $-th iteration satisfies the estimate

$$
\| z  ^ {n} \|  \leq  \
q  ^ {n} ( 1- q )  ^ {-1} \| u  ^ {0} - u  ^ {1} \| .
$$
  
Suppose that some solution  $  \overline{u}\; $  of (3) has a surrounding ball  $  S ( \overline{u}\; , R ) \equiv \{ {v } : {\| v - \overline{u}\; \| \leq  R } \} $  such that the system (3) considered together with the additional condition

$$ \tag{7 }
u  \in  S ( \overline{u}\; , R )
$$
  
is equivalent to (4) considered together with (7), and that (5) holds for  $  u = \overline{u}\; $  and any  $  v = \overline{u}\; + z $  with  $  \| z \| \leq  R $.  Then for any choice of the initial approximation  $  u  ^ {0} $  from  $  S ( \overline{u}\; , R ) $,  convergence of  $  u  ^ {n} $  in the method (6) to  $  \overline{u}\; $  with the error estimate  $  \| z  ^ {n} \| \leq  q  ^ {n} R $  is guaranteed.
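As a minimal illustration (not part of the original article), the simple iteration method (6) with the a priori error bound above can be sketched in Python; the contraction  $  P $,  the constant  $  q $  and the tolerance are hypothetical choices.

```python
import math

def simple_iteration(P, u0, q, eps, max_iter=10000):
    """Iterate u^{n+1} = P(u^n) for a contraction P with constant q < 1.

    Stops once the a priori bound q^n (1 - q)^{-1} |u^1 - u^0| for the
    error of method (6) drops below eps.
    """
    u = u0
    c = abs(P(u0) - u0) / (1.0 - q)   # (1 - q)^{-1} |u^1 - u^0|
    n = 0
    while q ** n * c > eps and n < max_iter:
        u = P(u)
        n += 1
    return u

# Hypothetical example: u = cos(u) is contractive on [0, 1] with q = sin(1) < 1.
root = simple_iteration(math.cos, 0.5, math.sin(1.0), 1e-10)
```

Here  $  u = \cos u $  plays the role of the equivalent system (4).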
  
For twice continuously-differentiable functions  $  \phi _ {i} $,  if a good initial approximation to a solution of the system (2) is available, an effective means of improving the accuracy is frequently the Newton–Kantorovich method, in which the equation  $  \phi _ {i} ( x _ {1} \dots x _ {N} ) = 0 $  from (2), determining a certain surface  $  P _ {i} $,  is replaced by the equation of the tangent surface to  $  P _ {i} $  at the point  $  x  ^ {n} $,  where  $  x  ^ {n} $  is a previously obtained approximation to a solution of (2) (see [[#References|[1]]]–[[#References|[5]]]). Under certain additional conditions the Newton–Kantorovich method leads to an error estimate of the form
  
$$
\| z  ^ {n} \|  \leq  \
c _ {1} ( c  \| z  ^ {0} \| ) ^ {2  ^ {n} } ,
$$
  
where  $  c _ {1} $  and  $  c $  are certain constants. At every iteration of this method one has to solve the system of linear algebraic equations with matrix

$$
A  ^ {( n)}  \equiv \
\left . \left (
\frac{\partial  \phi _ {i} }{\partial  x _ {j} }
\right ) \right | _ {x = x  ^ {n}  } .
$$
  
Sometimes one keeps this matrix fixed for several iterations, sometimes one replaces the derivatives $  \partial  \phi _ {i} / \partial  x _ {j} $ by difference approximations.
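The Newton–Kantorovich step just described, with the matrix  $  A  ^ {( n)} $  of derivatives replaced by difference approximations, can be sketched in Python (an illustration only; the test system and the step parameters are hypothetical, and `numpy` is assumed):

```python
import numpy as np

def newton_system(phi, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Newton-Kantorovich iteration for phi(x) = 0 in R^N, with the matrix
    A^{(n)} of partial derivatives approximated by forward differences."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = phi(x)
        if np.linalg.norm(f) < tol:
            break
        N = x.size
        A = np.empty((N, N))              # A^{(n)}[i, j] ~ d(phi_i)/d(x_j) at x^n
        for j in range(N):
            e = np.zeros(N)
            e[j] = h
            A[:, j] = (phi(x + e) - f) / h
        x = x + np.linalg.solve(A, -f)    # one linear system per iteration
    return x

# Hypothetical system (2): x_1^2 + x_2^2 - 2 = 0, x_1 - x_2 = 0; solution (1, 1).
phi = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 2.0, x[0] - x[1]])
sol = newton_system(phi, [2.0, 0.5])
```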
  
 
The Newton–Kantorovich method belongs to the group of [[Linearization methods|linearization methods]] for (3). Another method in this group is the [[Secant method|secant method]].
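For a single equation (1), the secant method admits a very short sketch (an illustration, not the article's own code; the test function is a hypothetical example):

```python
import math

def secant(phi, x0, x1, tol=1e-12, max_iter=100):
    """Secant method for phi(x) = 0: the derivative in the Newton step is
    replaced by the slope of the secant through the last two iterates."""
    f0, f1 = phi(x0), phi(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, phi(x1)
    return x1

# Hypothetical example: phi(x) = x - cos(x) has a root near 0.739.
root = secant(lambda x: x - math.cos(x), 0.0, 1.0)
```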
  
A large number of iteration methods (the so-called descent methods, see [[#References|[1]]]–[[#References|[3]]], [[#References|[9]]], [[#References|[10]]]–[[#References|[13]]]) are based on replacing the solution of the equations (3) by the minimization of a certain functional $  I ( u) $ (cf. also [[Descent, method of|Descent, method of]]). For example, for $  I ( u) $ one can take

$$ \tag{8 }
I ( u)  \equiv  \| L ( u) - f \|  ^ {2} .
$$
In a number of cases, when the initial non-linear equations are the Euler equations for the problem of minimizing a certain functional  $  I ( u) $,  such a variational formulation of the problem is even more natural; the operators  $  L $  in similar situations are gradients of the functional  $  I ( u) $  and are called potential operators (see [[#References|[5]]]–[[#References|[6]]]). Among the many versions of descent methods one can mention coordinate-wise descent, various gradient methods (in particular, the method of steepest descent and the method of conjugate gradients), and their modifications (see [[#References|[2]]], [[#References|[9]]], [[#References|[10]]]–[[#References|[13]]]). A number of iteration methods for the solution of the equations (3), describing a certain stationary state, can be treated as discretizations of the corresponding non-stationary problems. Therefore, methods of this class are called stabilization methods (cf. [[Adjustment method|Adjustment method]] and, for example, [[#References|[2]]]). Examples of such non-stationary problems are problems described by a system of ordinary differential equations

$$
A _ {1}
\frac{d  ^ {2} u }{d t  ^ {2} }
+
A _ {2}
\frac{d u }{d t }
+
L ( u) - f  =  0 .
$$
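A descent method based on minimizing the functional (8) can be sketched as follows (illustrative only; the operator  $  L $,  the right-hand side and the fixed step size are hypothetical choices, and the gradient of  $  I ( u) $  is approximated by central differences rather than assumed in closed form):

```python
import numpy as np

def descent(L, f, u0, step=0.02, tol=1e-6, max_iter=20000, h=1e-6):
    """Gradient descent on I(u) = ||L(u) - f||^2, cf. (8)."""
    u = np.asarray(u0, dtype=float)
    I = lambda v: np.linalg.norm(L(v) - f) ** 2
    for _ in range(max_iter):
        g = np.empty_like(u)
        for j in range(u.size):           # central-difference gradient of I
            e = np.zeros_like(u)
            e[j] = h
            g[j] = (I(u + e) - I(u - e)) / (2.0 * h)
        if np.linalg.norm(g) < tol:
            break
        u = u - step * g
    return u

# Hypothetical example: L(u) = (2u_1 + sin u_2, u_2 + cos(u_1)/2), with f chosen
# so that the minimizer of (8) is u = (0.5, 0.3).
L = lambda u: np.array([2.0 * u[0] + np.sin(u[1]), u[1] + 0.5 * np.cos(u[0])])
f = L(np.array([0.5, 0.3]))
u_min = descent(L, f, [0.0, 0.0])
```

The fixed step is an ad hoc choice; steepest descent or conjugate gradients would select it adaptively.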
  
The introduction of an additional independent variable is characteristic for the method of differentiation with respect to a parameter (a continuation method, see [[#References|[5]]], [[#References|[13]]]). Its essence consists in the introduction of an auxiliary parameter  $  \lambda \in [ 0 , 1 ] $,  the choice of continuously-differentiable functions  $  F _ {i} ( x , \lambda ) $,  and the replacement of (2) by the system

$$ \tag{9 }
F _ {i} ( x , \lambda )  = 0 ,\ \
i = 1 \dots N ,\ \
0 \leq  \lambda \leq  1 ;
$$
  
for  $  \lambda = 0 $  the system (9) must be easy to solve, and the functions  $  F _ {i} ( x , 1 ) $  must coincide with the  $  \phi _ {i} ( x) $,  $  i = 1 \dots N $.  Generally speaking, the system (9) determines

$$
x ( \lambda )  \equiv \
( x _ {1} ( \lambda ) \dots x _ {N} ( \lambda ) )
$$
  
as a function of  $  \lambda $,  and the required solution of (2) is  $  x ( 1) $.  If (9) is differentiated with respect to  $  \lambda $,  the result is a system of ordinary differential equations
  
$$ \tag{10 }
\sum _ { j= 1 } ^ { N }
\frac{\partial  F _ {i} }{\partial  x _ {j} }
\frac{d x _ {j} }{d \lambda }
+
\frac{\partial  F _ {i} }{\partial  \lambda }
 = 0 ,\ \
i = 1 \dots N .
$$
  
If one solves the Cauchy problem for it on  $  0 \leq  \lambda \leq  1 $  with initial conditions that are solutions of the system

$$
F _ {i} ( x , 0 )  = 0 ,\ \
i = 1 \dots N ,
$$
  
then one finds a solution of (2). Discretization of (10) with respect to $  \lambda $ leads to a numerical method for solving the system (2).
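For a single equation this procedure can be sketched as follows (an illustration, not from the article), assuming the standard homotopy  $  F ( x , \lambda ) = \phi ( x) - ( 1 - \lambda ) \phi ( x  ^ {0} ) $,  under which (10) reduces to one ordinary differential equation integrated here by Euler's method:

```python
def continuation(phi, dphi, x0, steps=2000):
    """Method of differentiation with respect to a parameter for phi(x) = 0.

    With F(x, lam) = phi(x) - (1 - lam) * phi(x0) one has F(x0, 0) = 0 and
    F(x, 1) = phi(x); equation (10) becomes
        dphi(x) * dx/dlam + phi(x0) = 0,
    which is integrated from lam = 0 to lam = 1 by Euler's method.
    """
    x = float(x0)
    r0 = phi(x0)                     # dF/dlam = phi(x0), constant in lam
    dlam = 1.0 / steps
    for _ in range(steps):
        x -= dlam * r0 / dphi(x)     # Euler step for dx/dlam = -phi(x0)/phi'(x)
    return x

# Hypothetical example: phi(x) = x**3 + x - 3 with x0 = 0; x(1) is the real root.
phi = lambda x: x ** 3 + x - 3.0
dphi = lambda x: 3.0 * x ** 2 + 1.0
x1 = continuation(phi, dphi, 0.0)
```

The Euler discretization of (10) leaves an O(1/steps) error, so in practice the result would be refined by a few Newton iterations.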
  
In the method of continuation with respect to a parameter (cf. [[Continuation method (to a parametrized family, for non-linear operators)|Continuation method (to a parametrized family, for non-linear operators)]]) the system (9) is solved for $  \lambda = \tau \dots m \tau $,  $  m \tau = 1 $,  and for each of these values of $  \lambda $ one applies a certain iteration method with an initial approximation taken from the solution obtained for the preceding value of $  \lambda $.  Both these methods are in essence iteration methods for solving (2) with a special procedure for finding a good initial approximation.
  
 
In the case of systems the problem of localizing the solution causes great difficulty. Since the majority of iteration methods converge only in the presence of a fairly good approximation to a solution, the two methods described above often make it possible to dispense with the need to localize a solution directly. For localization one also frequently uses theorems based on topological principles and on the monotonicity of operators (see [[#References|[4]]]–[[#References|[8]]]).
 
For the solution of equations (1), which are the simplest special cases of (3), the number of known iteration methods applicable in practice is very large (see, for example, [[#References|[1]]]–[[#References|[3]]], [[#References|[12]]], [[#References|[14]]]). Apart from the ones already considered one can mention, for example, the iteration methods of higher order (see [[#References|[1]]], [[#References|[14]]]), which include Newton's method as a special case, and the many iteration methods especially oriented to finding real or complex roots of polynomials

$$
p _ {n} ( z)  \equiv \
a _ {n} z  ^ {n} + a _ {n-1} z  ^ {n-1} + \dots + a _ {0} ,
$$
  
where the $  a _ {i} $ are real or complex numbers (see [[#References|[1]]], [[#References|[2]]]).
  
The problem of localizing the solutions of (1) reduces to a search for an interval at the ends of which the continuous function $  \phi $ takes values of opposite sign. When $  \phi $ is a polynomial, the task is less subtle, since theoretical bounds on the roots are known (see [[#References|[1]]]) and there are methods for finding all roots to required accuracy without giving good initial approximations to them (see [[#References|[12]]]).
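The localization by a sign change can be sketched as a bisection routine (an illustration, not the article's code; the sample function is hypothetical):

```python
def bisect(phi, a, b, eps=1e-12):
    """Halve an interval on which the continuous function phi changes sign."""
    fa = phi(a)
    if fa * phi(b) > 0:
        raise ValueError("phi must take values of opposite sign at a and b")
    while b - a > eps:
        m = 0.5 * (a + b)
        if fa * phi(m) <= 0:           # sign change kept in [a, m]
            b = m
        else:                          # sign change kept in [m, b]
            a, fa = m, phi(m)
    return 0.5 * (a + b)

# Hypothetical example: x**2 - 2 changes sign on [1, 2]; the root is sqrt(2).
r = bisect(lambda x: x ** 2 - 2.0, 1.0, 2.0)
```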
  
 
Iteration methods for solving equations (3) that arise in grid analogues of non-linear boundary value problems for partial differential equations are special cases of methods for solving grid systems (see, for example, [[#References|[13]]], [[#References|[15]]]–[[#References|[19]]]). Probably one of the most intensively applied methods for solving (3) is a modified method of simple iteration, which can be written in the form
  
$$ \tag{11 }
B u  ^ {n+1}  = B u  ^ {n} - \gamma ( L ( u  ^ {n} ) - f ) ,
$$
  
where (3) is regarded as an operator equation in the $  N $-dimensional space $  H _ {N} = H $, and $  B \in S $, where $  S $ stands for the set of symmetric positive linear operators mapping $  H $ into itself. An expedient study of such methods should proceed not in $  H $, but in the space $  H _ {B} $ with the new scalar product
  
$$
( u , v ) _ {H _ {B}  }  \equiv \
( B u , v ) ,\ \
\| u \| _ {B}  ^ {2}  \equiv \
( B u , u ) ,
$$
  
where $  ( u , v ) $ is the scalar product in $  H $.
  
If the operator $  L $ satisfies the conditions of strict monotonicity and is Lipschitz continuous:
  
$$ \tag{12 }
\forall u , v \ \
( L ( u) - L ( v) , u - v )
\geq  \delta _ {0} \| u - v \| _ {B}  ^ {2} ,\ \
\delta _ {0} > 0 ,
$$
  
$$ \tag{13 }
\forall u , v \ \
\| L ( u) - L ( v) \| _ {B ^ {-1} }  ^ {2}  \leq  \delta _ {2} \| u - v \| _ {B}  ^ {2} ,
$$
  
then (3) has a unique solution and the method (11), for a suitable choice of $  \gamma $, converges for any $  u  ^ {0} $ with error estimate
  
$$ \tag{14 }
\| z  ^ {n} \| _ {B}  \leq  q  ^ {n}
\| z  ^ {0} \| _ {B} ,
$$
  
where $  q \equiv q ( \delta _ {0} , \delta _ {2} , \gamma ) < 1 $ (see [[#References|[13]]], [[#References|[15]]]).
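As an illustration of (11), the following sketch (Python, not from the article) takes the simplest choice $  B = I $ and applies the method to a hypothetical two-dimensional strictly monotone system; the operator, the right-hand side and the value of $ \gamma $ are assumptions made only for this example:

```python
# Modified simple iteration B u^{n+1} = B u^n - gamma (L(u^n) - f), with B = I,
# for the hypothetical monotone system L(u) = A u + (u_1^3, u_2^3),
# A = [[2, 1], [1, 2]] symmetric positive definite.

def L(u):
    u1, u2 = u
    return (2*u1 + u2 + u1**3, u1 + 2*u2 + u2**3)

def simple_iteration(f, u0, gamma=0.2, n_iter=200):
    u = u0
    for _ in range(n_iter):
        Lu = L(u)
        u = tuple(ui - gamma * (Lui - fi) for ui, Lui, fi in zip(u, Lu, f))
    return u

# Manufacture f so that the exact solution is u* = (0.5, -0.5).
u_star = (0.5, -0.5)
f = L(u_star)
u = simple_iteration(f, (0.0, 0.0))
```

Since the derivative of $ L $ stays uniformly positive definite near the solution, the error contracts geometrically, in accordance with (14).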
  
In the most general version of this theorem it suffices to require that (12) and (13) hold for a solution $  u $ and all $  v $ in a ball $  S _ {B} ( u , R ) \equiv \{ {v } : {\| v - u \| _ {B} \leq  R } \} $ and that $  u  ^ {0} $ lies in this ball (see [[#References|[13]]]). In this case the constants $  \delta _ {0} $ and $  \delta _ {2} $ may depend on $  R $. To verify these conditions it suffices, e.g., to localize $  u $ by means of a priori estimates, obtaining $  \| u \| _ {B} \leq  R _ {0} $, and then to take $  S _ {B} ( u , R ) \equiv S _ {B} ( u , R _ {1} - R _ {0} ) $, provided that (12) and (13) hold for any $  u $ and $  v $ in $  S _ {B} ( 0 , R _ {1} ) $, where $  R _ {1} > R _ {0} $. The constant $  q $ in (14) can be diminished if $  L $ is differentiable and for its derivatives $  L _ {v}  ^  \prime  $, represented as the sum of the symmetric part $  L _ {v ,  \mathop{\rm sym}  }  ^  \prime  $ and the skew-symmetric part $  L _ {v , \textrm{ skew } }  ^  \prime  $, the inequalities
  
$$
\forall v \in S _ {B} ( u , R ) \ \
\sigma _ {0} B  \leq  L _ {v ,  \mathop{\rm sym}  }  ^  \prime  \leq  \sigma _ {1} B ,\ \
\sigma _ {0} > 0 ;
$$
  
$$
\forall w \ \
\| L _ {v , \textrm{ skew } }  ^  \prime  w \| _ {B ^ {-1} }  ^ {2}  \leq  \sigma _ {2} \| w \| _ {B}  ^ {2}
$$
  
are known. Then $  q \equiv q ( \gamma , \sigma _ {0} , \sigma _ {1} , \sigma _ {2} ) $ (see [[#References|[11]]], [[#References|[13]]], [[#References|[15]]]). Sometimes in the discussion of certain types of non-linearity it is advisable to use, instead of (12) and (13), inequalities like
  
$$
( L ( u) - L ( v) , B ( u - v ) )
\geq  \overline \delta \; _ {0} \| B ( u - v ) \|  ^ {2} ,\ \
\overline \delta \; _ {0} > 0 ;
$$
  
$$
\| L ( u) - L ( v) \|  ^ {2}  \leq  \overline \delta \; _ {2} \| B ( u - v ) \|  ^ {2}
$$
  
(see [[#References|[13]]]). For the operators $  B $ in (11) one can use, for example, splitting difference operators (the method of alternating directions) or factorized difference operators (the alternating-triangle method, the incomplete matrix factorization method), and others. Most attractive from the asymptotic point of view is the use of operators $  B $ such that the constants $  \delta _ {i} $ or $  \overline \delta \; _ {i} $ do not depend on the dimension of the space $  H _ {N} $ (see [[#References|[13]]]) and the operators $  B $ are sufficiently simple. Along these lines one succeeds in a number of cases in constructing iteration methods that make it possible to find a solution of (3) with accuracy $  \epsilon $ at the expense of altogether $  O ( N  \mathop{\rm log}  N  |  \mathop{\rm log}  \epsilon | ) $ (or even $  O ( N  \mathop{\rm log}  N ) $) arithmetic operations for $  \epsilon \approx N ^ {- \alpha } $, $  \alpha > 0 $ (see [[#References|[13]]]), if the computational work in the evaluation of $  L ( u  ^ {n} ) $ for a given $  u  ^ {n} $ can be estimated by $  O ( N) $. To verify conditions of the type (12) and (13), in many cases the grid (difference) analogues of the Sobolev imbedding theorems turn out to be very powerful (see [[#References|[13]]]). It is important to take into account the specific nature of the non-linearity. For example, when $  L ( u) = \Lambda u + Pu $, where $  \Lambda $ is a positive linear operator and $  P $ a quadratic non-linear operator having the property of "skew-symmetry" (that is, $  ( P u , u ) = 0 $ for all $  u $), one often succeeds in obtaining the constant $  \delta _ {2} ( R) $ in any ball $  S _ {B} ( 0 , R ) $ and $  \delta _ {0} $ depending only on $  \| u \| _ {B} $; then (11) converges for any $  u  ^ {0} $ (see [[#References|[13]]]). In a number of cases one can replace the original problem on the basis of a priori estimates by an equivalent one for which the required conditions hold in the whole space.
  
 
====References====
<table><TR><TD valign="top">[1]</TD> <TD valign="top">  I.S. Berezin,  N.P. Zhidkov,  "Computing methods" , Pergamon  (1973)  (Translated from Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  N.S. Bakhvalov,  "Numerical methods: analysis, algebra, ordinary differential equations" , MIR  (1977)  (Translated from Russian)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top">  J.M. Ortega,  W.C. Rheinboldt,  "Iterative solution of non-linear equations in several variables" , Acad. Press  (1970)</TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top">  M.A. Krasnosel'skii,  G.M. Vainikko,  P.P. Zabreiko,  et al.,  "Approximate solution of operator equations" , Wolters-Noordhoff  (1972)  (Translated from Russian)</TD></TR><TR><TD valign="top">[5]</TD> <TD valign="top">  S.G. Mikhlin,  "The numerical performance of variational methods" , Wolters-Noordhoff  (1971)  (Translated from Russian)</TD></TR><TR><TD valign="top">[6]</TD> <TD valign="top">  M.M. Vainberg,  "Variational method and method of monotone operators in the theory of nonlinear equations" , Wiley  (1973)  (Translated from Russian)</TD></TR><TR><TD valign="top">[7]</TD> <TD valign="top">  H. Gajewski,  K. Gröger,  K. Zacharias,  "Nichtlineare Operatorengleichungen und Operatorendifferentialgleichungen" , Akademie Verlag  (1974)</TD></TR><TR><TD valign="top">[8]</TD> <TD valign="top">  J.-L. Lions,  "Quelques méthodes de résolution des problèmes aux limites nonlineaires" , Dunod  (1969)</TD></TR><TR><TD valign="top">[9]</TD> <TD valign="top">  F.P. Vasil'ev,  "Lectures on methods for solving extremal problems" , Moscow  (1974)  (In Russian)</TD></TR><TR><TD valign="top">[10]</TD> <TD valign="top">  B.N. Pshenichnyi,  Yu.M. Danilin,  "Numerical methods in extremal problems" , MIR  (1978)  (Translated from Russian)</TD></TR><TR><TD valign="top">[11]</TD> <TD valign="top">  R. 
Glowinski,  "Numerical methods for nonlinear variational problems" , Springer  (1984)</TD></TR><TR><TD valign="top">[12]</TD> <TD valign="top">  V.V. Voevodin,  "Numerical methods of algebra. Theory and algorithms" , Moscow  (1966)  (In Russian)</TD></TR><TR><TD valign="top">[13]</TD> <TD valign="top">  E.G. D'yakonov,  "Minimization of computational work. Asymptotically-optimal algorithms" , Moscow  (1989)  (In Russian)</TD></TR><TR><TD valign="top">[14]</TD> <TD valign="top">  J.F. Traub,  "Iterative methods for the solution of equations" , Prentice-Hall  (1964)</TD></TR><TR><TD valign="top">[15]</TD> <TD valign="top">  A.A. Samarskii,  E.S. Nikolaev,  "Numerical methods for grid equations" , '''1–2''' , Birkhäuser  (1989)  (Translated from Russian)</TD></TR><TR><TD valign="top">[16]</TD> <TD valign="top">  R. Glowinski,  J.-L. Lions,  R. Trémolières,  "Numerical analysis of variational inequalities" , North-Holland  (1981)  (Translated from French)</TD></TR><TR><TD valign="top">[17]</TD> <TD valign="top">  W. Hackbusch,  "Multigrid solution of continuation problems"  R. Ansorge (ed.)  Th. Meis (ed.)  W. Törnig (ed.) , ''Iterative solution of nonlinear systems of equations'' , ''Lect. notes in math.'' , '''953''' , Springer  (1982)  pp. 20–44</TD></TR><TR><TD valign="top">[18]</TD> <TD valign="top">  F. Thomasset,  "Implementation of finite element methods for Navier–Stokes equations" , Springer  (1981)</TD></TR><TR><TD valign="top">[19]</TD> <TD valign="top">  W. Hackbusch (ed.)  V. Trottenberg (ed.) , ''Multigrid Methods. Proc. Köln-Porz, 1981'' , ''Lect. notes in math.'' , '''960''' , Springer  (1982)</TD></TR></table>
 
 
 
  
 
====Comments====
 
  
 
====References====
 
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  J.W. Daniel,  "The approximate minimization of functionals" , Prentice-Hall  (1971)</TD></TR></table>
 

Latest revision as of 14:54, 7 June 2020


Iteration methods for the solution of non-linear equations.

By a non-linear equation one means (see [1][3]) an algebraic or transcendental equation of the form

$$ \tag{1 } \phi ( x) = 0, $$

where $ x $ is a real variable and $ \phi ( x) $ a non-linear function, and by a system of non-linear equations a system of the form

$$ \tag{2 } \left . \begin{array}{c} \phi _ {1} ( x _ {1} \dots x _ {N} ) = 0 , \\ {\dots \dots \dots } \\ \phi _ {N} ( x _ {1} \dots x _ {N} ) = 0 , \\ \end{array} \right \} $$

that is not linear; the solutions of (2) are $ N $-dimensional vectors $ x = ( x _ {1} \dots x _ {N} ) $. Equation (1) and system (2) can be treated as a non-linear operator equation:

$$ \tag{3 } L ( u) = f $$

with a non-linear operator $ L $ acting from a finite-dimensional vector subspace $ H _ {N} $ into $ H _ {N} $.

Numerical methods for the solution of a non-linear equation (3) are called iteration methods if they are defined by the transition from a known approximation $ u ^ {n} $ at the $ n $-th iteration to a new iteration $ u ^ {n+1} $ and allow one to find, in a sufficiently large number of iterations, a solution of (3) within a prescribed accuracy $ \epsilon $. The most important iteration methods for the approximate solution of (3), both of a general and of a special form, characteristic for discrete (grid) methods for solving boundary value problems for equations and systems of partial differential equations of strongly-elliptic type, are the object of study of the present article. Non-linear operator equations connected with the discussion of infinite-dimensional spaces (see, for example, [4]–[8]) are a very broad mathematical concept, including as special cases, for example, non-linear integral equations and non-linear boundary value problems. Numerical methods for their approximate solution include also methods for their approximation by finite-dimensional equations; these methods are treated separately.

One of the most important methods for solving an equation (3) is the simple iteration method (successive substitution), which assumes that one can replace (3) by an equivalent system

$$ \tag{4 } u = P ( u) , $$

where $ u = ( u _ {1} \dots u _ {N} ) $ is an element of a finite-dimensional normed space $ H _ {N} $ and $ P $, mapping $ H _ {N} $ into $ H _ {N} $, is a contractive operator:

$$ \tag{5 } \exists q < 1 \ \forall u , v \ \| P ( u) - P ( v) \| \leq \ q \| u - v \| . $$

Then by a general contractive-mapping principle (see [1]–[4] and Contracting-mapping principle) equation (4) has a unique solution, the method of simple iteration

$$ \tag{6 } u ^ {n+1} = P ( u ^ {n} ) ,\ \ n = 0 , 1 ,\dots $$

converges for any initial approximation $ u ^ {0} $, and the error $ z ^ {n} $ at the $ n $-th iteration satisfies the estimate

$$ \| z ^ {n} \| \leq \ q ^ {n} ( 1- q ) ^ {-1} \| u ^ {0} - u ^ {1} \| . $$
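A minimal Python sketch of the method (6), with a hypothetical contraction $ P ( u) = \cos ( u) / 2 $ chosen only for illustration (so that $ q = 1/2 $ works on the whole line):

```python
import math

def fixed_point(P, u0, n_iter=50):
    """Simple iteration u^{n+1} = P(u^n), method (6), for a contractive map P."""
    u = u0
    for _ in range(n_iter):
        u = P(u)
    return u

# Hypothetical contraction: P(u) = cos(u)/2 satisfies |P'(u)| <= 1/2 = q < 1 everywhere.
P = lambda u: 0.5 * math.cos(u)
u0 = 0.0
u = fixed_point(P, u0)

# The a priori bound from the estimate above: |z^n| <= q^n (1 - q)^{-1} |u^0 - u^1|.
q, n = 0.5, 50
bound = q**n / (1 - q) * abs(u0 - P(u0))
```

Note that the bound is computable from the first two iterates alone, before the solution is known.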

Suppose that some solution $ \overline{u}\; $ of (3) has a surrounding ball $ S ( \overline{u}\; , R ) \equiv \{ {v } : {\| v - \overline{u}\; \| \leq R } \} $ such that the system (3) considered together with the additional condition

$$ \tag{7 } u \in S ( \overline{u}\; , R ) $$

is equivalent to (4) considered together with (7), and that (5) holds for $ u = \overline{u}\; $ and any $ v = \overline{u}\; + z $ with $ \| z \| \leq R $. Then, for any initial approximation $ u ^ {0} $ from $ S ( \overline{u}\; , R ) $, the method (6) converges to $ \overline{u}\; $ with the error estimate $ \| z ^ {n} \| \leq q ^ {n} R $.

For twice continuously-differentiable functions $ \phi _ {i} $, if a good initial approximation to a solution of the system (2) is available, frequently an effective means of improving the accuracy is the Newton–Kantorovich method, in which the equation $ \phi _ {i} ( x _ {1} \dots x _ {N} ) = 0 $ from (2) determining a certain surface $ P _ {i} $ is replaced by the equation of the tangent surface to $ P _ {i} $ at the point $ x ^ {n} $, where $ x ^ {n} $ is a previously obtained approximation to a solution of (2) (see [1]–[5]). Under certain additional conditions the Newton–Kantorovich method leads to an error estimate of the form

$$ \| z ^ {n} \| \leq \ c _ {1} ( c \| z ^ {0} \| ) ^ {2 ^ {n} } , $$

where $ c _ {1} $ and $ c $ are certain constants. At every iteration of this method one has to solve a system of linear algebraic equations with the matrix

$$ A ^ {(n)} \equiv \ \left . \left ( \frac{\partial \phi _ {i} }{\partial x _ {j} } \right ) \right | _ {x = x ^ {n} } . $$

Sometimes one keeps this matrix fixed for several iterations, sometimes one replaces the derivatives $ \partial \phi _ {i} / \partial x _ {j} $ by difference approximations.
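The iteration can be sketched as follows for a hypothetical $ 2 \times 2 $ system (circle intersected with a line); the example system and starting point are illustrative assumptions, and the linear system with matrix $ A ^ {(n)} $ is solved directly by Cramer's rule:

```python
def newton_2d(phi, jac, x0, n_iter=20):
    """Newton-Kantorovich iteration for a 2x2 system: at each step solve the
    linear system with matrix A^(n) = (d phi_i / d x_j) at the current point."""
    x, y = x0
    for _ in range(n_iter):
        f1, f2 = phi(x, y)
        (a, b), (c, d) = jac(x, y)
        det = a*d - b*c
        # Solve A^(n) (dx, dy) = -(f1, f2) by Cramer's rule.
        dx = (-f1*d + f2*b) / det
        dy = (-a*f2 + c*f1) / det
        x, y = x + dx, y + dy
    return x, y

# Hypothetical example: the circle x^2 + y^2 = 4 intersected with the line x = y,
# whose solution in the first quadrant is x = y = sqrt(2).
phi = lambda x, y: (x*x + y*y - 4.0, x - y)
jac = lambda x, y: ((2*x, 2*y), (1.0, -1.0))
x, y = newton_2d(phi, jac, (1.0, 0.5))
```

With a good starting point the error is squared at each step, in line with the estimate above.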

The Newton–Kantorovich method belongs to the group of linearization methods for (3). Another method in this group is the secant method.
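For the scalar case, the secant method can be sketched as follows (a Python illustration, not from the article; the equation $ x = \cos x $ is a hypothetical example):

```python
import math

def secant(phi, x0, x1, n_iter=30):
    """Secant method: the derivative in Newton's method is replaced by the
    difference quotient through the two most recent iterates."""
    for _ in range(n_iter):
        f0, f1 = phi(x0), phi(x1)
        if f1 == f0:          # stagnation, e.g. both iterates already at the root
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

# Hypothetical example: phi(x) = x - cos(x) has a single real root near 0.739.
root = secant(lambda x: x - math.cos(x), 0.0, 1.0)
```

The method needs no derivatives and still converges superlinearly near a simple root.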

A large number of iteration methods (the so-called descent methods) (see [1]–[3], [9], [10]–[13]) are based on replacing the solution of the equations (3) by the minimization of a certain functional $ I ( u) $ (cf. also Descent, method of). For example, for $ I ( u) $ one can take

$$ \tag{8 } I ( u) \equiv \| L ( u) - f \| ^ {2} . $$
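The minimization of (8) can be sketched in the scalar case by gradient descent; the operator $ L ( u) = u + u ^ {3} $, the right-hand side $ f = 10 $ and the step size are hypothetical choices for illustration only:

```python
def descent_minimize(L, dL, f, u0, step=0.002, n_iter=5000):
    """Gradient descent on I(u) = (L(u) - f)^2, the scalar case of (8);
    grad I(u) = 2 L'(u) (L(u) - f)."""
    u = u0
    for _ in range(n_iter):
        u -= step * 2.0 * dL(u) * (L(u) - f)
    return u

# Hypothetical monotone scalar operator L(u) = u + u^3; solve L(u) = 10,
# whose exact solution is u = 2 (since 2 + 8 = 10).
L_op = lambda u: u + u**3
dL_op = lambda u: 1.0 + 3.0 * u * u
u = descent_minimize(L_op, dL_op, 10.0, u0=0.0)
```

The step size must be small enough relative to the curvature of $ I $; more refined variants (steepest descent, conjugate gradients) choose it adaptively.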

In a number of cases when the initial non-linear equations are the Euler equations for the problem of minimizing a certain functional $ I ( u) $, such a variational formulation of the problem is even more natural; the operators $ L $ in similar situations are gradients of the functional $ I ( u) $ and are called potential operators (see [5]–[6]). Among the many versions of descent methods one can mention the methods of coordinate-wise descent, various gradient methods and, in particular, the method of steepest descent, the method of conjugate gradients, and others, and also their modifications (see [2], [9], [10]–[13]).

A number of iteration methods for the solution of the equations (3), describing a certain stationary state, can be treated as discretizations of the corresponding non-stationary problems. Therefore, methods of this class are called stabilization methods (cf. Adjustment method and, for example, [2]). Examples of such non-stationary problems are problems described by a system of ordinary differential equations

$$ A _ {1} \frac{d ^ {2} u }{d t ^ {2} } + A _ {2} \frac{d u }{d t } + L ( u) - f = 0 . $$
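In the first-order case ($ A _ {1} = 0 $, $ A _ {2} = I $), discretizing $ du / dt = - ( L ( u) - f ) $ by the explicit Euler scheme gives the simplest stabilization method: one marches in the artificial time $ t $ until the residual vanishes. A sketch under these assumptions (the componentwise operator $ L $ and all parameter values below are hypothetical):

```python
import numpy as np

def stabilization(L, f, u0, dt=0.1, tol=1e-10, max_steps=100000):
    """Stabilization (adjustment) method: integrate du/dt = -(L(u) - f)
    by explicit Euler; the steady state reached solves L(u) = f."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_steps):
        r = L(u) - f               # residual = negative 'time derivative'
        if np.linalg.norm(r) < tol:
            break
        u = u - dt * r             # one explicit Euler step
    return u

# Hypothetical monotone operator, acting componentwise: L(u) = u + 0.1*u^3
L_op = lambda u: u + 0.1 * u**3
f = np.array([1.1, 0.0])           # exact steady state is (1, 0)
u = stabilization(L_op, f, np.zeros(2), dt=0.5)
```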

The introduction of an additional independent variable is characteristic for the method of differentiation with respect to a parameter (a continuation method, see [5], [13]). Its essence consists in the introduction of an auxiliary parameter $ \lambda \in [ 0 , 1 ] $, the choice of continuously-differentiable functions $ F _ {i} ( x , \lambda ) $, and replacement of (2) by the system

$$ \tag{9 } F _ {i} ( x , \lambda ) = 0 ,\ \ i = 1 \dots N ,\ \ 0 \leq \lambda \leq 1 ; $$

for $ \lambda = 0 $ the system (9) must be easy to solve, and the functions $ F _ {i} ( x , 1 ) $ must coincide with the $ \phi _ {i} ( x) $, $ i = 1 \dots N $. Generally speaking, the system (9) determines

$$ x ( \lambda ) \equiv \ ( x _ {1} ( \lambda ) \dots x _ {N} ( \lambda ) ) $$

as a function of $ \lambda $, and the required solution of (2) is $ x ( 1) $. If (9) is differentiated with respect to $ \lambda $, the result is a system of ordinary differential equations

$$ \tag{10 } \sum _ {j = 1 } ^ { N } \frac{\partial F _ {i} }{\partial x _ {j} } \frac{d x _ {j} }{d \lambda } + \frac{\partial F _ {i} }{\partial \lambda } = 0 ,\ \ i = 1 \dots N . $$

If one solves the Cauchy problem for it on $ 0 \leq \lambda \leq 1 $ with initial conditions that are solutions of the system

$$ F _ {i} ( x , 0 ) = 0 ,\ \ i = 1 \dots N , $$

then one finds a solution of (2). Discretization of (10) with respect to $ \lambda $ leads to a numerical method for solving the system (2).
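A common concrete choice (an assumption here, not fixed by the text) is $ F ( x , \lambda ) = \phi ( x) - ( 1 - \lambda ) \phi ( x ^ {0} ) $, for which, in the scalar case, (10) reduces to $ dx / d \lambda = - \phi ( x ^ {0} ) / \phi ^ \prime ( x) $ with $ x ( 0) = x ^ {0} $. A sketch of the resulting method with Euler discretization in $ \lambda $ (the test equation is illustrative; the Euler error is $ O ( 1 / \textrm{steps} ) $):

```python
def parameter_continuation(phi, dphi, x0, steps=1000):
    """Differentiation with respect to a parameter (Davidenko's method)
    for the homotopy F(x, lam) = phi(x) - (1 - lam)*phi(x0):
    integrate dx/dlam = -phi(x0)/phi'(x) from lam = 0 to lam = 1
    by explicit Euler steps; x(1) approximates a root of phi."""
    x, r0 = x0, phi(x0)
    h = 1.0 / steps
    for _ in range(steps):
        x = x - h * r0 / dphi(x)
    return x

# Illustrative equation: phi(x) = x^2 - 4 starting from x0 = 1;
# the continuation path ends near the root 2
x = parameter_continuation(lambda x: x * x - 4.0, lambda x: 2.0 * x, 1.0)
```

In practice the Euler result is usually polished by a few Newton iterations, which is exactly the role the continuation plays: supplying a good initial approximation.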

In the method of continuation with respect to a parameter (cf. Continuation method (to a parametrized family, for non-linear operators)) the system (9) is solved for $ \lambda = \tau \dots m \tau $, $ m \tau = 1 $, and for each of these values of $ \lambda $ one applies some iteration method, taking as initial approximation the approximate solution obtained for the preceding value of $ \lambda $. Both these methods are in essence iteration methods for solving (2) with a special procedure for finding a good initial approximation.
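A sketch of this second variant for a scalar homotopy (the particular $ F $, the step count and the fixed number of Newton corrections per stage are illustrative assumptions):

```python
def incremental_continuation(F, dFdx, m, x0):
    """Continuation with respect to a parameter: solve F(x, lam) = 0
    successively for lam = 1/m, 2/m, ..., 1, applying a few Newton
    steps at each stage with the previous solution as initial guess."""
    x = x0
    for k in range(1, m + 1):
        lam = k / m
        for _ in range(5):                 # a few Newton corrections
            x = x - F(x, lam) / dFdx(x, lam)
    return x

# Illustrative homotopy F(x, lam) = x^2 - (1 + 3*lam):
# at lam = 0 the root is x = 1, at lam = 1 it is x = 2
F = lambda x, lam: x * x - (1.0 + 3.0 * lam)
dF = lambda x, lam: 2.0 * x
x = incremental_continuation(F, dF, 10, 1.0)
```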

In the case of systems the problem of localizing the solution causes great difficulty. Since the majority of iteration methods converge only in the presence of a fairly good approximation to a solution, the two methods described above often make it possible to dispense with the need to localize a solution directly. For localization one also frequently uses theorems based on topological principles and on the monotonicity of operators (see [4]–[8]).

For the solution of equations (1), which are the simplest special cases of (3), the number of known iteration methods applicable in practice is very large (see, for example, [1]–[3], [12], [14]). Apart from the ones already considered one can mention, for example, the iteration methods of higher order (see [1], [14]), which include Newton's method as a special case, and the many iteration methods especially oriented to finding real or complex roots of polynomials

$$ p _ {n} ( z) \equiv \ a _ {n} z ^ {n} + a _ {n - 1 } z ^ {n - 1 } + \dots + a _ {0} , $$

where the $ a _ {i} $ are real or complex numbers (see [1], [2]).
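One widely used method of this kind computes all roots of $ p _ {n} $ at once as the eigenvalues of its companion matrix; this is, for instance, what NumPy's `roots` routine does, given the coefficients in the order $ a _ {n} \dots a _ {0} $ (the cubic below is an arbitrary example):

```python
import numpy as np

# All roots of p(z) = z^3 - 1, i.e. the three cube roots of unity,
# found as eigenvalues of the companion matrix of p
roots = np.roots([1.0, 0.0, 0.0, -1.0])
```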

The problem of localizing the solutions of (1) reduces to a search for an interval at whose ends the continuous function $ \phi $ takes values of opposite sign. When $ \phi $ is a polynomial, the problem is simpler, since theoretical bounds on the roots are known (see [1]) and there are methods for finding all roots to any required accuracy without good initial approximations to them (see [12]).
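Once such an interval has been found, repeated halving retains a subinterval on which $ \phi $ changes sign, yielding the root to any required accuracy. A minimal bisection sketch (the test function is hypothetical):

```python
def bisect(phi, a, b, tol=1e-12):
    """Bisection: given [a, b] with phi(a)*phi(b) < 0, halve the
    interval, keeping the half on which phi changes sign."""
    fa = phi(a)
    while b - a > tol:
        c = 0.5 * (a + b)
        fc = phi(c)
        if fa * fc <= 0:
            b = c              # sign change lies in [a, c]
        else:
            a, fa = c, fc      # sign change lies in [c, b]
    return 0.5 * (a + b)

# Illustrative function: x^3 - 2 changes sign on [1, 2]
root = bisect(lambda x: x**3 - 2.0, 1.0, 2.0)
```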

Iteration methods for solving equations (3) that arise as grid analogues of non-linear boundary value problems for partial differential equations are special cases of methods for solving grid systems (see, for example, [13], [15]–[19]). Probably one of the most intensively applied methods for solving (3) is a modified method of simple iteration, which can be written in the form

$$ \tag{11 } B u ^ {n + 1 } = B u ^ {n} - \gamma ( L ( u ^ {n} ) - f ) , $$

where (3) is regarded as an operator equation in the $ N $-dimensional space $ H _ {N} = H $, and $ B \in S $, where $ S $ stands for the set of symmetric positive linear operators mapping $ H $ into itself. It is expedient to study such methods not in $ H $, but in the space $ H _ {B} $ with the new scalar product

$$ ( u , v ) _ {H _ {B} } \equiv \ ( B u , v ) ,\ \ \| u \| _ {B} ^ {2} \equiv \ ( B u , u ) , $$

where $ ( u , v ) $ is the scalar product in $ H $.

If the operator $ L $ satisfies the conditions of strong monotonicity and Lipschitz continuity:

$$ \tag{12 } \forall u , v \ ( L ( u) - L ( v) , u - v ) \geq \delta _ {0} \| u - v \| _ {B} ^ {2} ,\ \ \delta _ {0} > 0 , $$

$$ \tag{13 } \forall u , v \ \| L ( u) - L ( v) \| _ {B ^ {-1} } ^ {2} \leq \delta _ {2} \| u - v \| _ {B} ^ {2} , $$

then (3) has a unique solution and the method (11), for a suitable choice of $ \gamma $, converges for any $ u ^ {0} $ with error estimate

$$ \tag{14 } \| z ^ {n} \| _ {B} \leq q ^ {n} \| z ^ {0} \| _ {B} , $$

where $ q \equiv q ( \delta _ {0} , \delta _ {2} , \gamma ) < 1 $ (see [13], [15]).
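A sketch of method (11) in the finite-dimensional setting; the monotone operator $ L $, the choice $ B = \mathop{\rm diag} A $ and the step $ \gamma $ below are illustrative assumptions, not prescribed by the text:

```python
import numpy as np

def simple_iteration(L, f, B, u0, gamma, tol=1e-10, max_iter=10000):
    """Modified simple iteration (11): B u^{n+1} = B u^n - gamma*(L(u^n) - f).
    Each step solves one linear system with the fixed operator B."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        r = L(u) - f
        if np.linalg.norm(r) < tol:
            break
        u = u - gamma * np.linalg.solve(B, r)
    return u

# Hypothetical strongly monotone operator: L(u) = A u + u^3 with A SPD
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
L_op = lambda u: A @ u + u**3
B = np.diag(np.diag(A))                 # simple symmetric positive B
f = L_op(np.array([1.0, -1.0]))         # so the exact solution is (1, -1)
u = simple_iteration(L_op, f, B, np.zeros(2), gamma=0.3)
```

In the grid context $ B $ would instead be a splitting or factorized difference operator, as discussed below, so that each step remains cheap even for large $ N $.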

In the most general version of this theorem it suffices to require that (12) and (13) hold for a solution $ u $ and all $ v $ in a ball $ S _ {B} ( u , R ) \equiv \{ v : \| v - u \| _ {B} \leq R \} $ and that $ u ^ {0} $ lies in this ball (see [13]). In this case the constants $ \delta _ {0} $ and $ \delta _ {2} $ may depend on $ R $. To verify these conditions it suffices, e.g., to localize $ u $ by means of a priori estimates, obtaining $ \| u \| _ {B} \leq R _ {0} $, and then to take $ S _ {B} ( u , R ) \equiv S _ {B} ( u , R _ {1} - R _ {0} ) $, provided that (12) and (13) hold for any $ u $ and $ v $ in $ S _ {B} ( 0 , R _ {1} ) $, where $ R _ {1} > R _ {0} $. The constant $ q $ in (14) can be diminished if $ L $ is differentiable and for its derivatives $ L _ {v} ^ \prime $, represented as the sum of the symmetric part $ L _ {v , \mathop{\rm sym} } ^ \prime $ and the skew-symmetric part $ L _ {v , \textrm{ skew } } ^ \prime $, the inequalities

$$ \forall v \in S _ {B} ( u , R ) \ \sigma _ {0} B \leq L _ {v , \mathop{\rm sym} } ^ \prime \leq \sigma _ {1} B ,\ \sigma _ {0} > 0 ; $$

$$ \forall w \| L _ {v , \textrm{ skew } } ^ \prime w \| _ {B ^ {- 1 } } ^ {2} \leq \sigma _ {2} \| w \| _ {B} ^ {2} $$

are known. Then $ q \equiv q ( \gamma , \sigma _ {0} , \sigma _ {1} , \sigma _ {2} ) $ (see [11], [13], [15]). Sometimes, for certain types of non-linearity, it is advisable to use instead of (12) and (13) inequalities like

$$ ( L ( u) - L ( v) , B ( u - v ) ) \geq \overline{\delta} _ {0} \| B ( u - v ) \| ^ {2} ,\ \ \overline{\delta} _ {0} > 0 ; $$

$$ \| L ( u) - L ( v) \| ^ {2} \leq \overline{\delta} _ {2} \| B ( u - v ) \| ^ {2} $$

(see [13]). For the operators $ B $ in (11) one can use, for example, splitting difference operators (the method of alternating directions), factorized difference operators (the alternating-triangle method, the incomplete matrix factorization method), and others. Most attractive from the asymptotic point of view is the use of operators $ B $ that are sufficiently simple and such that the constants $ \delta _ {i} $ or $ \overline{\delta} _ {i} $ do not depend on the dimension of the space $ H _ {N} $ (see [13]). Along these lines one succeeds in a number of cases in constructing iteration methods that make it possible to find a solution of (3) with accuracy $ \epsilon $ at the expense of altogether $ O ( N \mathop{\rm log} N | \mathop{\rm log} \epsilon | ) $ (or even $ O ( N \mathop{\rm log} N ) $) arithmetic operations for $ \epsilon \approx N ^ {- \alpha } $, $ \alpha > 0 $ (see [13]), provided the computational work in the evaluation of $ L ( u ^ {n} ) $ for a given $ u ^ {n} $ can be estimated by $ O ( N) $.

To verify conditions of the type (12) and (13), in many cases the grid (difference) analogues of the Sobolev imbedding theorems turn out to be very powerful (see [13]). It is important to take into account the specific nature of the non-linearity. For example, when $ L ( u) = \Lambda u + Pu $, where $ \Lambda $ is a positive linear operator and $ P $ a quadratic non-linear operator having the property of "skew-symmetry" (that is, $ ( P u , u ) = 0 $ for all $ u $), one often succeeds in obtaining the constant $ \delta _ {2} ( R) $ in any ball $ S _ {B} ( 0 , R ) $ and $ \delta _ {0} $ depending only on $ \| u \| _ {B} $; then (11) converges for any $ u ^ {0} $ (see [13]). In a number of cases one can replace the original problem, on the basis of a priori estimates, by an equivalent one for which the required conditions hold in the whole space.

References

[1] I.S. Berezin, N.P. Zhidkov, "Computing methods" , Pergamon (1973) (Translated from Russian)
[2] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian)
[3] J.M. Ortega, W.C. Rheinboldt, "Iterative solution of non-linear equations in several variables" , Acad. Press (1970)
[4] M.A. Krasnosel'skii, G.M. Vainikko, P.P. Zabreiko, et al., "Approximate solution of operator equations" , Wolters-Noordhoff (1972) (Translated from Russian)
[5] S.G. Mikhlin, "The numerical performance of variational methods" , Wolters-Noordhoff (1971) (Translated from Russian)
[6] M.M. Vainberg, "Variational method and method of monotone operators in the theory of nonlinear equations" , Wiley (1973) (Translated from Russian)
[7] H. Gajewski, K. Gröger, K. Zacharias, "Nichtlineare Operatorengleichungen und Operatorendifferentialgleichungen" , Akademie Verlag (1974)
[8] J.-L. Lions, "Quelques méthodes de résolution des problèmes aux limites nonlineaires" , Dunod (1969)
[9] F.P. Vasil'ev, "Lectures on methods for solving extremal problems" , Moscow (1974) (In Russian)
[10] B.N. Pshenichnyi, Yu.M. Danilin, "Numerical methods in extremal problems" , MIR (1978) (Translated from Russian)
[11] R. Glowinski, "Numerical methods for nonlinear variational problems" , Springer (1984)
[12] V.V. Voevodin, "Numerical methods of algebra. Theory and algorithms" , Moscow (1966) (In Russian)
[13] E.G. D'yakonov, "Minimization of computational work. Asymptotically-optimal algorithms" , Moscow (1989) (In Russian)
[14] J.F. Traub, "Iterative methods for the solution of equations" , Prentice-Hall (1964)
[15] A.A. Samarskii, E.S. Nikolaev, "Numerical methods for grid equations" , 1–2 , Birkhäuser (1989) (Translated from Russian)
[16] R. Glowinski, J.-L. Lions, R. Trémolières, "Numerical analysis of variational inequalities" , North-Holland (1981) (Translated from French)
[17] W. Hackbusch, "Multigrid solution of continuation problems" R. Ansorge (ed.) Th. Meis (ed.) W. Törnig (ed.) , Iterative solution of nonlinear systems of equations , Lect. notes in math. , 953 , Springer (1982) pp. 20–44
[18] F. Thomasset, "Implementation of finite element methods for Navier–Stokes equations" , Springer (1981)
[19] W. Hackbusch (ed.) V. Trottenberg (ed.) , Multigrid Methods. Proc. Köln-Porz, 1981 , Lect. notes in math. , 960 , Springer (1982)

Comments

References

[a1] J.W. Daniel, "The approximate minimization of functionals" , Prentice-Hall (1971)
This article was adapted from an original article by E.G. D'yakonov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article