Interpolation in numerical mathematics
{{TEX|done}}

A method for approximating or precisely finding some quantity by known individual values of it or of other quantities related to it. On the basis of interpolation a whole series of approximate methods for solving mathematical problems has been developed.
 
==The approximate representation and calculation of functions.==
Interpolation of functions is considered as one of the ways of approximating them. Interpolating a function $f(x)$ on a segment $[a, b]$ by its values at the nodes $x_k$ of a grid $\Delta_n = \{ a \leq x_0 < \dots < x_n \leq b \}$ means constructing another function $L_n(x) \equiv L_n(f; x)$ such that $L_n(x_k) = f(x_k)$, $k = 0, \dots, n$. In a more general setting, the problem of interpolating a function $f(x)$ consists of constructing $L_n(x)$ not only by prescribing its values on a grid $\Delta_n$, but also its derivatives at individual nodes, up to a certain order, or by describing some other relation connecting $f(x)$ and $L_n(x)$.
Usually $L_n(x)$ is constructed in the form

$$
L_n(x) = \sum_{i=0}^{n} a_i \phi_i(x),
$$

where $\{ \phi_i(x) \}$ is a certain system of linearly independent functions, chosen in advance. Such an interpolation is called linear with respect to $\{ \phi_i(x) \}$, while $L_n(x)$ is called an interpolation polynomial in the system $\{ \phi_i(x) \}$ or an interpolation function.

The choice of $\{ \phi_i(x) \}$ is determined by the properties of the function class for which the interpolation formula is constructed. E.g., for the approximation of $2\pi$-periodic functions on $[0, 2\pi]$ one naturally takes the trigonometric system for $\{ \phi_i(x) \}$; for the approximation of bounded or increasing functions on $[0, \infty)$ one takes the system of rational or exponential functions, taking into account the behaviour of the functions to be approximated at infinity, etc.

Most often one uses algebraic interpolation: $\phi_i(x) = x^i$; its simplest variant (linear interpolation with two nodes $x_k$ and $x_{k+1}$) is defined by the formula

$$ \tag{1}
L_1(x) = \frac{x - x_k}{x_{k+1} - x_k} [ f(x_{k+1}) - f(x_k) ] + f(x_k), \qquad x_k \leq x \leq x_{k+1}.
$$
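For illustration, formula (1) can be evaluated directly; the grid and the test function in the following sketch are assumed for the example.

<syntaxhighlight lang="python">
# A minimal sketch of formula (1): linear interpolation on the segment
# [x_k, x_{k+1}]; the grid and test function are assumptions.
import numpy as np

def linear_interp(x, xk, xk1, fk, fk1):
    """Evaluate L_1(x) of formula (1)."""
    return (x - xk) / (xk1 - xk) * (fk1 - fk) + fk

xs = np.linspace(0.0, 1.0, 6)      # grid Delta_n (assumed)
fs = np.sin(xs)                    # values f(x_k)
x = 0.35
k = np.searchsorted(xs, x) - 1     # segment containing x
print(linear_interp(x, xs[k], xs[k + 1], fs[k], fs[k + 1]), np.sin(x))
</syntaxhighlight>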
Algebraic interpolation of a very high order is seldom used in practice in the problem of approximating functions on an entire segment $[a, b]$. One usually restricts oneself to linear interpolation by (1) or to quadratic interpolation with three nodes on particular segments of the grid, by the formula

$$ \tag{2}
L_2(x) = \frac{(x - x_k)(x - x_{k+1})}{(x_{k-1} - x_k)(x_{k-1} - x_{k+1})} f(x_{k-1}) + \frac{(x - x_{k-1})(x - x_{k+1})}{(x_k - x_{k-1})(x_k - x_{k+1})} f(x_k) + \frac{(x - x_{k-1})(x - x_k)}{(x_{k+1} - x_{k-1})(x_{k+1} - x_k)} f(x_{k+1}),
$$

$$
x_{k-1} \leq x \leq x_{k+1}.
$$
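A sketch of formula (2) under the same conventions; the nodes and the test function are assumed for the example.

<syntaxhighlight lang="python">
# Three-node quadratic interpolation, formula (2); test data assumed.
import numpy as np

def quadratic_interp(x, xm, xk, xp, fm, fk, fp):
    """Evaluate L_2(x) with nodes xm = x_{k-1}, xk = x_k, xp = x_{k+1}."""
    return (fm * (x - xk) * (x - xp) / ((xm - xk) * (xm - xp))
            + fk * (x - xm) * (x - xp) / ((xk - xm) * (xk - xp))
            + fp * (x - xm) * (x - xk) / ((xp - xm) * (xp - xk)))

xm, xk, xp = 0.0, 0.5, 1.0
print(quadratic_interp(0.3, xm, xk, xp, np.exp(xm), np.exp(xk), np.exp(xp)),
      np.exp(0.3))
</syntaxhighlight>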
  
 
There are several ways of writing the algebraic interpolation polynomials (cf. [[Interpolation formula|Interpolation formula]]). Interpolation by splines is finding increasing application (cf. [[Spline|Spline]]; [[Interpolation spline|Interpolation spline]]).
  
Parabolic or cubic splines are most often used in practice. An interpolation spline of defect 1 for a function $f(x)$ with respect to a given grid $\Delta_n$ is a function $S_3(x) \equiv S_3(f; x)$ that is a polynomial of degree three on each segment $[x_{k-1}, x_k]$, belongs to the class of twice continuously-differentiable functions, and satisfies the conditions

$$
S_3(x_k) = f(x_k), \qquad k = 0, \dots, n; \quad n \geq 2.
$$
  
There are still two free parameters in this definition; these are determined by additional boundary conditions: $S_3^{(i)}(a) = S_3^{(i)}(b)$, $i = 1, 2$; $S_3'(a) = a_n$, $S_3'(b) = b_n$; or other conditions.

As well as immediately in the problem of approximating functions, splines are also used in solving other problems; the splines are then required to coincide on a grid $\Delta_n$ not only with the values of a function $f(x)$, but also with those of the derivatives of this function, up to a certain order.
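A sketch of cubic spline interpolation, using the natural boundary condition as one possible way of fixing the two free parameters; the grid, the function and the use of SciPy are assumptions of the example.

<syntaxhighlight lang="python">
# Cubic interpolation spline S_3 of defect 1 via SciPy; the "natural"
# boundary conditions (S_3'' = 0 at a and b) fix the two free parameters.
import numpy as np
from scipy.interpolate import CubicSpline

xs = np.linspace(0.0, np.pi, 8)
S3 = CubicSpline(xs, np.sin(xs), bc_type='natural')
print(S3(1.0), np.sin(1.0))       # spline value vs. exact value
print(S3(1.0, 1), np.cos(1.0))    # its first derivative
</syntaxhighlight>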
  
When processing empirical data $\{ y_k \}$ one often determines the coefficients $a_i$ in $L_n(x)$ by requiring

$$
\sum_{k=1}^{m} [ y_k - L_n(x_k) ]^2, \qquad m \geq n,
$$

to be minimal. The function $L_n(x)$ thus constructed is called the interpolation function in the sense of least squares.
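A sketch of interpolation in the sense of least squares for the algebraic system $\phi_i(x) = x^i$; the noisy data are assumed for the example.

<syntaxhighlight lang="python">
# Least-squares determination of the coefficients a_i in L_n(x);
# numpy.polyfit minimizes sum_k [y_k - L_n(x_k)]^2.
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 20)                  # m = 20 points, n = 2
ys = xs**2 + 0.01 * rng.standard_normal(20)     # empirical data {y_k}
coeffs = np.polyfit(xs, ys, deg=2)
print(np.polyval(coeffs, 0.5))                  # L_n(0.5)
</syntaxhighlight>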
  
The interpolation of functions in several variables meets with a number of principal and numerical difficulties. E.g., in the case of algebraic interpolation the Lagrange interpolation polynomial of fixed degree need not, generally speaking, exist for an arbitrary system of distinct nodes. In particular, for a function $f(x, y)$ in two variables such a polynomial $L_n(x, y)$ of total degree at most $n$ can be constructed for nodes $(x_k, y_k)$ only if these nodes do not lie on an algebraic curve of order $n$.
  
Another approach to the interpolation of functions $f(x_1, \dots, x_m)$ in several variables is that one first interpolates the function with respect to $x_1$ for fixed $x_k$, $k = 2, \dots, m$, then with respect to the next variable $x_2$ for fixed remaining nodes, etc. Now the interpolation polynomial $L_{n_1 \dots n_m}(x_1, \dots, x_m)$ for $f(x_1, \dots, x_m)$ with nodes

$$
( x_1^{k_1}, \dots, x_m^{k_m} ),
$$

$$
x_j^\nu \neq x_j^\mu, \quad \nu \neq \mu; \qquad k_j = 0, \dots, n_j; \qquad j = 1, \dots, m,
$$
  
 
has the form:

$$
L_{n_1 \dots n_m}(x_1, \dots, x_m) = \sum_{k_1, \dots, k_m = 0}^{n_1, \dots, n_m} \frac{\omega_{n_1}(x_1) \dots \omega_{n_m}(x_m)}{\omega_{n_1}'(x_1^{k_1}) \dots \omega_{n_m}'(x_m^{k_m})} \cdot \frac{f( x_1^{k_1}, \dots, x_m^{k_m} )}{(x_1 - x_1^{k_1}) \dots (x_m - x_m^{k_m})},
$$

where

$$
\omega_{n_j}(x_j) = \prod_{k_j = 0}^{n_j} ( x_j - x_j^{k_j} ), \qquad j = 1, \dots, m.
$$
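A sketch of this tensor-product construction for $m = 2$, written with one-dimensional Lagrange basis factors (algebraically the same as the $\omega$-quotients above); the grids and the test function are assumed.

<syntaxhighlight lang="python">
# Tensor-product Lagrange interpolation in two variables.
import numpy as np

def lagrange_basis(t, nodes, k):
    """One-dimensional Lagrange basis l_k(t) over the given nodes."""
    out = 1.0
    for j, tj in enumerate(nodes):
        if j != k:
            out *= (t - tj) / (nodes[k] - tj)
    return out

f = lambda x, y: np.sin(x) * np.cos(y)
xn = np.linspace(0.0, 1.0, 4)   # nodes x_1^{k_1}
yn = np.linspace(0.0, 1.0, 4)   # nodes x_2^{k_2}
x, y = 0.3, 0.7
L = sum(f(xn[i], yn[j]) * lagrange_basis(x, xn, i) * lagrange_basis(y, yn, j)
        for i in range(len(xn)) for j in range(len(yn)))
print(L, f(x, y))
</syntaxhighlight>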
  
 
Interpolation splines for functions of several variables are defined on a multi-dimensional grid, with corresponding changes, in analogy with the one-dimensional case. Interpolation of functions is used for replacing complicated functions by simpler ones that can be calculated more quickly; for the approximate recovery of functions on their entire domain from their values at individual points; and for obtaining smoother functions describing a running process. Problems of this kind are both of independent interest and arise in an auxiliary fashion in many branches of science and technology in solving complex problems. Interpolation of functions is also used in approximately finding limit values of functions, in problems of accelerating the convergence of series or sequences, etc.
  
 
==Numerically solving systems of non-linear equations.==
The general ideas for constructing interpolation methods for solving an equation $f(x) = 0$ or a system of equations $f_i(x_1, \dots, x_m) = 0$, $i = 1, \dots, m$, are the same. The difficulties of the problem of interpolating functions of several variables arise especially when investigating and practically using this kind of method for a large number of equations. The basic principle for obtaining interpolation methods for solving an equation $f(x) = 0$ is the replacement of $f(x)$ by its interpolation polynomial $L_n(x)$ and subsequently solving $L_n(x) = 0$. The roots of $L_n(x) = 0$ are taken as approximate values of those of $f(x) = 0$. The interpolation polynomial $L_n(x)$ is also used in constructing iteration methods for solving $f(x) = 0$.
E.g., taking for $x_{n+1}$ the root of the linear algebraic interpolation polynomial constructed with respect to the values $f(x_n)$ and $f'(x_n)$ at $x_n$, or with respect to the values $f(x_{n-1})$ and $f(x_n)$ at $x_{n-1}$ and $x_n$, leads to the method of Newton (cf. [[Newton method|Newton method]]) and the [[Secant method|secant method]], respectively:

$$
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = x_n - \frac{f(x_n)}{f(x_{n-1}, x_n)},
$$

where $f(x_{n-1}, x_n)$ is the [[divided difference]] of $f(x)$ at $x_{n-1}$ and $x_n$. Under certain conditions, the sequence $\{ x_n \}$ converges, as $n \rightarrow \infty$, to a solution of $f(x) = 0$.
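A sketch of both iterations for the assumed test equation $x^2 - 2 = 0$:

<syntaxhighlight lang="python">
# Newton's method and the secant method for f(x) = x^2 - 2 (assumed).
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x

x = 1.0
for _ in range(6):                 # Newton iteration
    x -= f(x) / df(x)
print(x)

x_prev, x_cur = 1.0, 2.0
for _ in range(8):                 # secant iteration
    slope = (f(x_cur) - f(x_prev)) / (x_cur - x_prev)  # f(x_{n-1}, x_n)
    x_prev, x_cur = x_cur, x_cur - f(x_cur) / slope
print(x_cur)
</syntaxhighlight>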
Another way to construct methods for solving an equation $f(x) = 0$ is based on interpolating the inverse function $x = g(y)$. Suppose that for interpolating $g(y)$ the algebraic Lagrange interpolation polynomial

$$
L_n(y) = \sum_{k=0}^{n} \frac{\omega_n(y)}{\omega_n'(y_k)(y - y_k)} g(y_k), \qquad \omega_n(y) = \prod_{k=0}^{n} (y - y_k),
$$

is taken. It is assumed that the inverse function exists in a neighbourhood of the required root of $f(x) = 0$ and that a table of values $x_k$ and $y_k = f(x_k)$, $k = 0, \dots, n$, is available. The next approximate value $x_{n+1}$ is the value of the interpolation polynomial at zero:

$$
x_{n+1} = - \omega_n(0) \sum_{k=0}^{n} \frac{x_k}{\omega_n'(y_k) y_k}.
$$
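A sketch of this inverse interpolation step for an assumed table (values of $f(x) = x^2 - 2$ near $\sqrt{2}$):

<syntaxhighlight lang="python">
# Evaluate the Lagrange polynomial of the inverse function at y = 0.
import numpy as np

f = lambda x: x * x - 2.0
xk = np.array([1.3, 1.4, 1.5])     # table x_k (assumed)
yk = f(xk)                         # y_k = f(x_k)

def omega_prime(k):
    """omega_n'(y_k) = prod_{j != k} (y_k - y_j)."""
    return np.prod([yk[k] - yk[j] for j in range(len(yk)) if j != k])

omega0 = np.prod(-yk)              # omega_n(0)
x_next = -omega0 * sum(xk[k] / (omega_prime(k) * yk[k])
                       for k in range(len(xk)))
print(x_next, np.sqrt(2.0))
</syntaxhighlight>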
  
 
==Numerical integration.==
Interpolation underlies most formulas for the approximate computation of integrals: the integrand is replaced by an interpolation polynomial or an interpolation spline, and the result is integrated exactly. E.g., quadrature formulas of highest algebraic accuracy, also called Gauss quadrature formulas (cf. [[Gauss quadrature formula|Gauss quadrature formula]])
  
$$
\int\limits_a^b p(x) f(x) \, dx \approx \sum_{k=1}^{n} A_k f(x_k),
$$

where $p(x)$ is a sign-definite weight function, are obtained by replacing $f(x)$ by algebraic interpolation polynomials constructed with respect to the roots $x_k$ of the polynomial of degree $n$ that is orthogonal with weight $p(x)$ to all polynomials of lower degree.
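A sketch for the weight $p(x) = 1$ on $[-1, 1]$, where the nodes are the roots of the Legendre polynomial; the integrand is assumed.

<syntaxhighlight lang="python">
# Gauss-Legendre quadrature: nodes x_k and weights A_k from NumPy.
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(5)
approx = np.sum(weights * np.exp(nodes))       # sum A_k f(x_k)
print(approx, np.exp(1.0) - np.exp(-1.0))      # exact integral of e^x
</syntaxhighlight>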
  
If one partitions the complete integration interval $[a, b]$ into an even number $n$ of equal parts of length $h = (b - a)/n$ and if on each double part one replaces $f(x)$ by its quadratic interpolation polynomial with nodes at the end and middle points, then one is led to the compound [[Simpson formula|Simpson formula]]

$$
\int\limits_a^b f(x) \, dx \approx \frac{b - a}{3n} [ f_0 + f_n + 2 ( f_2 + f_4 + \dots + f_{n-2} ) + 4 ( f_1 + f_3 + \dots + f_{n-1} ) ],
$$

where $f_k = f(a + kh)$, $k = 0, \dots, n$.
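A sketch of the compound Simpson formula; the integrand and the interval are assumed for the example.

<syntaxhighlight lang="python">
# Compound Simpson rule; n must be even.
import numpy as np

def simpson(f, a, b, n):
    fk = f(a + (b - a) / n * np.arange(n + 1))
    return (b - a) / (3 * n) * (fk[0] + fk[n]
                                + 2 * fk[2:n:2].sum()
                                + 4 * fk[1:n:2].sum())

print(simpson(np.sin, 0.0, np.pi, 10), 2.0)    # exact value is 2
</syntaxhighlight>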
  
 
One may also take interpolation splines of some fixed degree as basis. The scheme for constructing formulas for the approximate computation of integrals explained above can be used in the multi-dimensional case as well.

==Numerical differentiation.==
Formulas for numerical differentiation are obtained as results of differentiating interpolation formulas. Here, as a rule, certain a priori information is available about the function to be differentiated, related to its smoothness.
  
Let $L_n(x)$ be a certain interpolation polynomial of a function $f(x)$, and let $R_n(x)$ be the remainder in the interpolation formula

$$
f(x) = L_n(x) + R_n(x).
$$
  
 
If in

$$
f^{(i)}(x) = L_n^{(i)}(x) + R_n^{(i)}(x)
$$

the quantity $R_n^{(i)}(x)$ is neglected, one obtains a formula for the approximate computation of the $i$-th derivative of $f(x)$:

$$ \tag{3}
f^{(i)}(x) \approx L_n^{(i)}(x).
$$

Formulas for numerical differentiation, with interpolation at their basis, are obtained from (3) depending on the choice of $L_n(x)$. For numerical differentiation one uses, as a rule, approximate values of the function at the nodes; the error in a formula of numerical differentiation depends not only on the manner of interpolating and the interpolation step, but also on the errors in the values of the function at the nodes which are used. E.g., in the case of the linear approximation (1),
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950125.png" /></td> <td valign="top" style="width:5%;text-align:right;">(3)</td></tr></table>
+
$$ \tag{4 }
 +
f ^ { \prime } ( x _ {k} ) \approx \
  
Formulas for numerical differentiation, with interpolation at their basis, are obtained from (3) depending on the choice of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950126.png" />. For numerical differentiation one uses, as a rule, approximate values of the function at nodes; the error in a formula of numerical differentiation not only depends on the manner of interpolating and the interpolation step, but also on the errors in the values of the function at the nodes which are used. E.g., in the case of the linear approximation (1)
+
\frac{1}{h}
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950127.png" /></td> <td valign="top" style="width:5%;text-align:right;">(4)</td></tr></table>
+
[ f ( x _ {k+} 1 ) - f ( x _ {k} ) ] ,
 +
$$
  
and the remainder <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950128.png" /> has the representation
+
and the remainder $  R _ {1} ^ { \prime } ( x) $
 +
has the representation
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950129.png" /></td> </tr></table>
+
$$
 +
R _ {1} ^ { \prime } ( x _ {k} )  = \
 +
-
 +
\frac{1}{2}
 +
f ^ { \prime\prime } ( \xi ) h ,\ \
 +
\xi \in ( x _ {k} , x _ {k+} 1 ) ,\ \
 +
h = x _ {k+} 1 - x _ {k} .
 +
$$
  
If $f(x_{k+1})$ and $f(x_k)$ are known with respective errors $\epsilon_{k+1}$, $\epsilon_k$, $\epsilon_{k+1} \neq \epsilon_k$, then the error in (4) will contain yet another term $( \epsilon_{k+1} - \epsilon_k ) / h$, which decreases as the step $h$ increases. When using formulas for numerical differentiation, the interpolation step must be in accordance with the errors in the values of the function. Therefore, in practice it often happens that a function known with some error on a dense grid is interpolated on a coarser grid only.
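A sketch illustrating this trade-off for formula (4): the truncation error behaves like $h |f''| / 2$, the data-error term like $\epsilon / h$; the noise level and the function are assumed.

<syntaxhighlight lang="python">
# Total error of the difference quotient (4) on noisy values.
import numpy as np

rng = np.random.default_rng(1)
eps = 1e-6                                        # error in the values
for h in (1e-1, 1e-3, 1e-5):
    fk = np.sin(1.0) + eps * rng.uniform(-1, 1)
    fk1 = np.sin(1.0 + h) + eps * rng.uniform(-1, 1)
    print(h, abs((fk1 - fk) / h - np.cos(1.0)))   # error vs. exact f'(1)
</syntaxhighlight>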
  
 
==Numerically solving integral equations.==
The unknown function $\phi(x)$ in the integral equation is replaced by some interpolation formula (an interpolation polynomial, an interpolation spline, etc.) with nodes $x_k$, and an approximate value $\phi(x_k)$ is determined from the system obtained after replacing the independent variable $x$ by the nodes $x_k$.
  
 
E.g., for the linear Fredholm integral equation of the second kind

$$ \tag{5}
\phi(x) = \lambda \int\limits_a^b K(x, s) \phi(s) \, ds + f(x)
$$

one can use the Lagrange form of the interpolation polynomial,

$$
\phi(x) = \sum_{k=0}^{n} l_k(x) \phi(x_k) + R_n(x) \equiv L_n(x) + R_n(x),
$$

where $R_n(x)$ is the remainder and

$$
l_k(x) = \frac{\omega_n(x)}{\omega_n'(x_k)(x - x_k)}, \qquad \omega_n(x) = \prod_{k=0}^{n} (x - x_k).
$$
  
Replacing $\phi(x)$ in (5) by its interpolation polynomial $L_n(x)$ and $x$ by $x_i$, one obtains the linear system of equations

$$
\phi_i = \lambda \sum_{k=0}^{n} M_k(x_i) \phi_k + f(x_i),
$$

$$
M_k(x_i) = \int\limits_a^b K(x_i, s) l_k(s) \, ds, \qquad i = 0, \dots, n,
$$

for determining the approximate values $\phi_i$ of $\phi(x)$ at $x_i$. In the case of non-linear integral equations the approximate values $\phi_i$ have to be determined from the corresponding non-linear system.
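A sketch of such a discretization of (5); here the integrals $M_k(x_i)$ are approximated by a Gauss quadrature rule rather than computed exactly, and the kernel, $\lambda$ and free term are assumptions.

<syntaxhighlight lang="python">
# Quadrature-based discretization of the Fredholm equation (5).
import numpy as np

n = 20
lam = 0.5
x, w = np.polynomial.legendre.leggauss(n)        # nodes and weights on [-1, 1]
K = lambda t, s: 0.5 * np.exp(-np.abs(t - s))    # kernel K(x, s) (assumed)
A = np.eye(n) - lam * K(x[:, None], x[None, :]) * w[None, :]
phi = np.linalg.solve(A, np.ones(n))             # free term f(x) = 1 (assumed)
print(phi[:3])                                   # approximate values phi(x_k)
</syntaxhighlight>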
  
 
==Numerically solving differential equations.==
One of the possibilities for constructing numerical methods for solving differential equations consists of replacing the derivatives of the unknown functions by interpolation formulas for numerical differentiation, and in a number of cases also of replacing other functions and expressions occurring in the equation by interpolation.

Suppose one has the following formulas for numerical differentiation with equally-spaced nodes $x_k = x_0 + kh$:
  
$$ \tag{6}
y'(x_k) \approx \frac{y(x_{k+1}) - y(x_k)}{h} = \frac{1}{h} \Delta y(x_k),
$$

$$
y''(x_k) \approx \frac{1}{h^2} [ y(x_{k-1}) - 2 y(x_k) + y(x_{k+1}) ] = \frac{1}{h^2} \Delta^2 y(x_k),
$$
  
 
obtained by differentiation of the formulas of linear and quadratic interpolation (1) and (2), respectively. Then for a second-order ordinary differential equation

$$
F ( x, y, y', y'' ) = 0
$$

one obtains, under certain additional conditions, taking (6) into account, the finite-difference equation

$$
F \left( x_k, y_k, \frac{1}{h} \Delta y_k, \frac{1}{h^2} \Delta^2 y_k \right) = 0.
$$

In it, together with the equations obtained from the additional conditions, the approximate values $y_k$ of the solution $y(x)$ at the nodes $x_k$ occur.
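A sketch for the assumed linear test problem $y'' + y = 0$, $y(0) = 0$, $y(\pi/2) = 1$, using the second difference from (6) at the interior nodes:

<syntaxhighlight lang="python">
# Finite-difference solution of a linear two-point boundary value problem.
import numpy as np

n = 50
x = np.linspace(0.0, np.pi / 2, n + 1)
h = x[1] - x[0]
# Interior equations: (y_{k-1} - 2 y_k + y_{k+1}) / h^2 + y_k = 0
A = (np.diag(np.full(n - 1, 1.0 - 2.0 / h**2))
     + np.diag(np.full(n - 2, 1.0 / h**2), 1)
     + np.diag(np.full(n - 2, 1.0 / h**2), -1))
b = np.zeros(n - 1)
b[-1] = -1.0 / h**2                      # known boundary value y(pi/2) = 1
y = np.linalg.solve(A, b)
print(y[n // 2 - 1], np.sin(x[n // 2]))  # exact solution is y = sin x
</syntaxhighlight>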
  
 
Reduction of partial differential equations to the corresponding finite-difference equations is also often carried out using formulas for numerical differentiation.

Interpolation methods are applied to solve differential equations written in integral form. E.g., to find an approximate solution of the Cauchy problem
  
$$ \tag{7}
y' = f(x, y), \qquad y(x_0) = y_0,
$$

at points $x_k = x_0 + kh$, $k = 0, 1, \dots,$ one uses difference formulas of the form

$$
y_{n+1} = y_n + h \sum_{j=1}^{n} B_j f(x_{n-j}, y_{n-j}),
$$

obtained by replacing the integrand in

$$
y(x_{n+1}) = y(x_n) + \int\limits_{x_n}^{x_{n+1}} f(x, y) \, dx
$$

by an interpolation polynomial and subsequent integration. In particular, Adams' formula for first-order equations (7), Störmer's formula for second-order equations, etc., are obtained this way.
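A sketch of the simplest case of this construction, the two-step Adams formula obtained by integrating the linear interpolation polynomial of $f$ through $x_{n-1}$, $x_n$; the test problem $y' = y$, $y(0) = 1$ and the start-up value are assumptions.

<syntaxhighlight lang="python">
# Two-step Adams (Adams-Bashforth) method for the Cauchy problem (7).
import numpy as np

f = lambda x, y: y
h = 0.01
x, y = 0.0, 1.0
f_prev = f(x - h, np.exp(-h))      # start-up from the exact solution (assumed)
for _ in range(100):
    f_cur = f(x, y)
    y += h * (1.5 * f_cur - 0.5 * f_prev)   # weights B_j of the formula
    f_prev = f_cur
    x += h
print(y, np.exp(1.0))              # compare with exact y(1) = e
</syntaxhighlight>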
==Interpolation of operators and some general approaches to the construction of numerical methods.==
The construction of numerical methods for solving mathematical problems written as $Ax = y$, where $x$ and $y$ are elements of certain sets $X$ and $Y$ and $A$ is a given operator, consists of replacing $X$, $Y$ and $A$, or only some of these three objects, by other objects that are convenient for calculation purposes. This replacement should be carried out in such a way that the solution to the new problem
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950178.png" /></td> </tr></table>
+
$$
 +
\widetilde{y}  = \widetilde{A}  \widetilde{x}  ,
 +
$$
  
consisting of finding <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950179.png" /> or <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950180.png" />, is in some sense close to the solution of the original problem. One of the methods for replacing <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950181.png" /> by an approximation <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950182.png" /> is the use of [[Interpolation of operators|interpolation of operators]]. The problem of the interpolation of operators has various formulations. A linear interpolation operator <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950183.png" /> for a given operator is written as
+
consisting of finding $  \widetilde{y}  $
 +
or $  \widetilde{x}  $,  
 +
is in some sense close to the solution of the original problem. One of the methods for replacing $  A $
 +
by an approximation $  \widetilde{A}  $
 +
is the use of [[Interpolation of operators|interpolation of operators]]. The problem of the interpolation of operators has various formulations. A linear interpolation operator $  L _ {1} ( F ;  x ) $
 +
for a given operator is written as
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950184.png" /></td> <td valign="top" style="width:5%;text-align:right;">(8)</td></tr></table>
+
$$ \tag{8 }
 +
L _ {1} ( F ; x )  = \
 +
F ( x _ {0} ) + F ( x _ {0} , x _ {1} ) ( x - x _ {0} ) ,
 +
$$
  
where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950185.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950186.png" /> are the interpolation nodes and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950187.png" /> is the first-order divided-difference operator. The latter is defined as the linear operator for which
+
where $  x _ {0} $,  
 +
$  x _ {1} $
 +
are the interpolation nodes and $  F( x _ {0} , x _ {1} ) $
 +
is the first-order divided-difference operator. The latter is defined as the linear operator for which
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950188.png" /></td> </tr></table>
+
$$
 +
F ( x _ {0} , x _ {1} ) ( x _ {0} - x _ {1} )  = \
 +
F ( x _ {0} ) - F ( x _ {1} ) .
 +
$$
  
The given definition of the divided-difference operator can be made concrete in a number of cases. Using linear interpolation (8), the  "secant method"  for the equation <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950189.png" /> can be written as
+
The given definition of the divided-difference operator can be made concrete in a number of cases. Using linear interpolation (8), the  "secant method"  for the equation $  F ( x) = 0 $
 +
can be written as
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950190.png" /></td> </tr></table>
+
$$
 +
x _ {n+} 1  = x _ {n} -
 +
F ^ { - 1 } ( x _ {n-} 1 , x _ {n} ) F ( x _ {n} ) ,
 +
$$
  
where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950191.png" /> is the operator inverse to <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950192.png" />.
+
where $  F ^ { - 1 } ( x _ {n-} 1 , x _ {n} ) $
 +
is the operator inverse to $  F ( x _ {n-} 1 , x _ {n} ) $.
  
The formulation of the problem of interpolation of functionals, which is of interest in the theory of approximation methods, is as follows. Let <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950193.png" /> be some fixed functionals, or classes of functionals, defined on <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950194.png" />. A functional <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950195.png" /> is called an interpolation functional polynomial for a given functional <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950196.png" /> and system of points <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950197.png" /> in <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950198.png" /> if the relations
+
The formulation of the problem of interpolation of functionals, which is of interest in the theory of approximation methods, is as follows. Let $  \{ \psi _ {i} ( x) \} _ {i=} 0 ^ {n} $
 +
be some fixed functionals, or classes of functionals, defined on $  X $.  
 +
A functional $  L _ {n} [ F ;  x ] $
 +
is called an interpolation functional polynomial for a given functional $  F ( x) $
 +
and system of points $  \{ x _ {k} \} $
 +
in $  X $
 +
if the relations
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950199.png" /></td> </tr></table>
+
$$
 +
L _ {n} [ F ; x _ {k} ]  = F ( x _ {k} ) ,\ \
 +
L _ {n} [ \psi _ {i} ; x ]  \equiv  \psi _ {i} ( x) ,
 +
\  i = 0 \dots n ,
 +
$$
  
 
hold.
 
hold.
Line 214: Line 560:
 
E.g., approximate interpolation formulas for computing continual integrals have the form
 
E.g., approximate interpolation formulas for computing continual integrals have the form
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950200.png" /></td> </tr></table>
+
$$
 +
\int\limits _ { X } F ( x)  d \mu ( x)  \approx \
 +
\int\limits _ { X } L _ {n} [ F ; x ]  d \mu ( x) ,
 +
$$
  
where the integral over the interpolation polynomial <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950201.png" /> with respect to a certain measure <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950202.png" /> can be computed exactly or can be reduced to a finite-dimensional integral. When <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950203.png" /> is the space <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950204.png" /> of continuous functions on an interval <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950205.png" />, then <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950206.png" /> can be represented by a Stieltjes integral,
+
where the integral over the interpolation polynomial $  L _ {n} [ F ;  x ] $
 +
with respect to a certain measure $  \mu $
 +
can be computed exactly or can be reduced to a finite-dimensional integral. When $  X $
 +
is the space $  C [ a , b ] $
 +
of continuous functions on an interval $  [ a , b ] $,  
 +
then $  L _ {1} [ F ;  x ] $
 +
can be represented by a Stieltjes integral,
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950207.png" /></td> </tr></table>
+
$$
 +
L _ {1} [ F ; x ] =
 +
$$
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950208.png" /></td> </tr></table>
+
$$
 +
= \
 +
F ( x _ {0} ) + \int\limits _ { a } ^ { b } 
 +
\frac{x ( \tau ) - x _ {0} ( \tau ) }{x _ {1} ( \tau ) - x _ {0} ( \tau )
 +
}
 +
  d \tau F [ x _ {0} ( \cdot ) + \chi ( \cdot ,\
 +
\tau ) ( x _ {1} ( \cdot ) - x ( \cdot ) ) ] ,
 +
$$
  
where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950209.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950210.png" /> are the interpolation nodes while
+
where $  x _ {0} ( t) $,  
 +
$  x _ {1} ( t) $
 +
are the interpolation nodes while
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950211.png" /></td> </tr></table>
+
$$
 +
\chi ( t , \tau )  = \
 +
\left \{
 +
\begin{array}{ll}
 +
1,  & \tau > t ,  \\
 +
0,  & \tau \leq  t . \\
 +
\end{array}
  
If <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950212.png" /> is a constant or a linear functional, then <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950213.png" />.
+
\right .$$
  
The use of interpolation in finding extremal values of functionals may be illustrated by two interpolation analogues of the gradient method for finding a local unconditional minimum of a functional <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950214.png" /> that is defined on some Hilbert space. The first analogue is obtained if, in the gradient method, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950215.png" /> is replaced by <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950216.png" />, i.e.
+
If  $  F ( x) $
 +
is a constant or a linear functional, then  $  L _ {1} [ F ;  x ] \equiv F ( x) $.
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950217.png" /></td> <td valign="top" style="width:5%;text-align:right;">(9)</td></tr></table>
+
The use of interpolation in finding extremal values of functionals may be illustrated by two interpolation analogues of the gradient method for finding a local unconditional minimum of a functional  $  F ( x) $
 +
that is defined on some Hilbert space. The first analogue is obtained if, in the gradient method,  $  \mathop{\rm grad}  F ( x _ {n} ) $
 +
is replaced by  $  F ( x _ {n-} 1 , x _ {n} ) $,
 +
i.e.
  
The second analogue uses the gradient of the interpolation polynomials. From approximations <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950218.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950219.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950220.png" /> to an extremum <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950221.png" /> of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950222.png" /> one constructs the quadratic interpolation polynomial
+
$$ \tag{9 }
 +
x _ {n+} 1  = x _ {n} -
 +
\epsilon _ {n} F ( x _ {n-} 1 , x _ {n} ) ,\ \
 +
\epsilon _ {n} > 0 ,\ \
 +
n = 1 , 2 ,\dots .
 +
$$
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950223.png" /></td> </tr></table>
+
The second analogue uses the gradient of the interpolation polynomials. From approximations  $  x _ {n-} 2 $,
 +
$  x _ {n-} 1 $,
 +
$  x _ {n} $
 +
to an extremum  $  x  ^ {*} $
 +
of  $  F ( x) $
 +
one constructs the quadratic interpolation polynomial
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950224.png" /></td> </tr></table>
+
$$
 +
L _ {2} [ F ; x ]  = \
 +
F ( x _ {n} ) + F ( x _ {n-} 1 , x _ {n} ) ( x - x _ {n} ) +
 +
$$
  
where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950225.png" /> is the second-order divided difference of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950226.png" /> with respect to <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950227.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950228.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950229.png" />. The new approximation <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950230.png" /> is determined by
+
$$
 +
+
 +
F ( x _ {n-} 2 , x _ {n-} 1 , x _ {n} ) ( x - x _ {n-} 1 ) ( x - x _ {n} ) ,
 +
$$
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950231.png" /></td> <td valign="top" style="width:5%;text-align:right;">(10)</td></tr></table>
+
where  $  F ( x _ {n-} 2 , x _ {n-} 1 , x _ {n} ) $
 +
is the second-order divided difference of  $  F ( x) $
 +
with respect to  $  x _ {n-} 2 $,
 +
$  x _ {n-} 1 $,
 +
$  x _ {n} $.
 +
The new approximation  $  x _ {n+} 1 $
 +
is determined by
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/i/i051/i051950/i051950232.png" /></td> </tr></table>
+
$$ \tag{10 }
 +
x _ {n+} 1  = x _ {n} -
 +
\epsilon _ {n}  \mathop{\rm grad}  L _ {2} [ F ; x _ {n} ] ,
 +
$$
 +
 
 +
$$
 +
\epsilon _ {n}  > 0 ,\  n  = 2 , 3 ,\dots .
 +
$$
  
 
The interpolation methods (9), (10) use two, respectively, there, initial approximations.
 
The interpolation methods (9), (10) use two, respectively, there, initial approximations.
Line 250: Line 655:
 
====References====
 
====References====
 
<table><TR><TD valign="top">[1]</TD> <TD valign="top">  V.L. Goncharov,  "The theory of interpolation and approximation of functions" , Moscow  (1954)  (In Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  I.S. Berezin,  N.P. Zhidkov,  "Computing methods" , Pergamon  (1973)  (Translated from Russian)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top">  N.S. Bakhvalov,  "Numerical methods: analysis, algebra, ordinary differential equations" , MIR  (1977)  (Translated from Russian)</TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top">  V.I. Krylov,  V.V. Bobkov,  P.I. Monastyrnyi,  "Numerical methods" , '''1–2''' , Moscow  (1976–1977)  (In Russian)</TD></TR><TR><TD valign="top">[5]</TD> <TD valign="top">  S.B. Stechkin,  Yu.N. Subbotin,  "Splines in numerical mathematics" , Moscow  (1976)  (In Russian)</TD></TR><TR><TD valign="top">[6]</TD> <TD valign="top">  A.A. Samarskii,  "Theorie der Differenzverfahren" , Akad. Verlagsgesell. Geest u. Portig K.-D.  (1984)  (Translated from Russian)</TD></TR><TR><TD valign="top">[7]</TD> <TD valign="top">  L.A. Yanovich,  "Approximate computation of continual integrals with respect to Gaussian measures" , Minsk  (1976)  (In Russian)</TD></TR></table>
 
<table><TR><TD valign="top">[1]</TD> <TD valign="top">  V.L. Goncharov,  "The theory of interpolation and approximation of functions" , Moscow  (1954)  (In Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  I.S. Berezin,  N.P. Zhidkov,  "Computing methods" , Pergamon  (1973)  (Translated from Russian)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top">  N.S. Bakhvalov,  "Numerical methods: analysis, algebra, ordinary differential equations" , MIR  (1977)  (Translated from Russian)</TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top">  V.I. Krylov,  V.V. Bobkov,  P.I. Monastyrnyi,  "Numerical methods" , '''1–2''' , Moscow  (1976–1977)  (In Russian)</TD></TR><TR><TD valign="top">[5]</TD> <TD valign="top">  S.B. Stechkin,  Yu.N. Subbotin,  "Splines in numerical mathematics" , Moscow  (1976)  (In Russian)</TD></TR><TR><TD valign="top">[6]</TD> <TD valign="top">  A.A. Samarskii,  "Theorie der Differenzverfahren" , Akad. Verlagsgesell. Geest u. Portig K.-D.  (1984)  (Translated from Russian)</TD></TR><TR><TD valign="top">[7]</TD> <TD valign="top">  L.A. Yanovich,  "Approximate computation of continual integrals with respect to Gaussian measures" , Minsk  (1976)  (In Russian)</TD></TR></table>
 
 
  
 
====Comments====
 
====Comments====

Latest revision as of 22:13, 5 June 2020


A method for approximating or precisely finding some quantity by known individual values of it or of other quantities related to it. On the basis of interpolation a whole series of approximate methods for solving mathematical problems has been developed.

Most significant in numerical mathematics is the problem of constructing means for the interpolation of functions. The interpolation of functionals and operators is also widely used in constructing numerical methods.

The approximate representation and calculation of functions.

Interpolation of functions is considered as one of the ways of approximating them. Interpolating a function $ f ( x) $ on a segment $ [ a , b ] $ by its values at the nodes $ x _ {k} $ of a grid $ \Delta _ {n} = \{ a \leq x _ {0} < \dots < x _ {n} \leq b \} $ means constructing another function $ L _ {n} ( x) \equiv L _ {n} ( f ; x ) $ such that $ L _ {n} ( x _ {k} ) = f ( x _ {k} ) $, $ k = 0 \dots n $. In a more general setting, the problem of interpolating a function $ f ( x) $ consists of constructing $ L _ {n} ( x) $ not only by prescribing values on a grid $ \Delta _ {n} $, but also derivatives at individual nodes, up to a certain order, or by describing some other relation connecting $ f ( x) $ and $ L _ {n} ( x) $.

Usually $ L _ {n} ( x) $ is constructed in the form

$$ L _ {n} ( x) = \sum _ {i=0} ^ { n } a _ {i} \phi _ {i} ( x) , $$

where $ \{ \phi _ {i} ( x) \} $ is a certain system of linearly independent functions, chosen in advance. Such an interpolation is called linear with respect to $ \{ \phi _ {i} ( x) \} $, while $ L _ {n} ( x) $ is called an interpolation polynomial in the system $ \{ \phi _ {i} ( x) \} $ or an interpolation function.

The choice of $ \{ \phi _ {i} ( x) \} $ is determined by the properties of the function class for which the interpolation formula is constructed. E.g., for the approximation of $ 2 \pi $- periodic functions on $ [ 0 , 2 \pi ] $ one naturally takes the trigonometric system for $ \{ \phi _ {i} ( x) \} $, for the approximation of bounded or increasing functions on $ [ 0 , \infty ) $ one takes the system of rational or exponential functions, taking into account the behaviour of the functions to be approximated at infinity, etc.

Most often one uses algebraic interpolation: $ \phi _ {i} ( x) = x ^ {i} $; its simplest variant (linear interpolation with the two nodes $ x _ {k} $ and $ x _ {k+1} $) is defined by the formula

$$ \tag{1 } L _ {1} ( x) = \frac{x - x _ {k} }{x _ {k+1} - x _ {k} } [ f ( x _ {k+1} ) - f ( x _ {k} ) ] + f ( x _ {k} ) , $$

$$ x _ {k} \leq x \leq x _ {k+1} . $$

Algebraic interpolation of very high order is seldom used in practice for approximating a function on an entire segment $ [ a , b ] $. One usually restricts oneself to linear interpolation (1) or to quadratic interpolation with three nodes on the individual segments of the grid, by the formula

$$ \tag{2 } L _ {2} ( x) = \frac{( x - x _ {k} ) ( x - x _ {k+1} ) }{( x _ {k-1} - x _ {k} ) ( x _ {k-1} - x _ {k+1} ) } f ( x _ {k-1} ) + \frac{( x - x _ {k-1} ) ( x - x _ {k+1} ) }{( x _ {k} - x _ {k-1} ) ( x _ {k} - x _ {k+1} ) } f ( x _ {k} ) + \frac{( x - x _ {k-1} ) ( x - x _ {k} ) }{( x _ {k+1} - x _ {k-1} ) ( x _ {k+1} - x _ {k} ) } f ( x _ {k+1} ) , $$

$$ x _ {k-1} \leq x \leq x _ {k+1} . $$
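As an illustration, a minimal Python sketch of formulas (1) and (2); the grid and the test function are hypothetical choices:

import numpy as np

def linear_interp(x, xk, xk1, fk, fk1):
    # Formula (1): linear interpolation between the nodes xk and xk1.
    return (x - xk) / (xk1 - xk) * (fk1 - fk) + fk

def quadratic_interp(x, nodes, values):
    # Formula (2): Lagrange form with the three nodes x_{k-1}, x_k, x_{k+1}.
    (x0, x1, x2), (f0, f1, f2) = nodes, values
    return (f0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
            + f1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
            + f2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

xs = np.array([0.0, 0.5, 1.0])      # hypothetical grid
fs = np.sin(xs)
print(linear_interp(0.3, xs[0], xs[1], fs[0], fs[1]))  # ~ sin(0.3)
print(quadratic_interp(0.3, xs, fs))                   # closer to sin(0.3)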

There are several ways of writing the algebraic interpolation polynomial (cf. Interpolation formula). Interpolation by splines is finding increasingly wide application (cf. Spline; Interpolation spline).

Parabolic or cubic splines are most often used in practice. An interpolation spline of defect 1 for a function $ f( x) $ with respect to a given grid $ \Delta _ {n} $ is a function $ S _ {3} ( x) \equiv S _ {3} ( f ; x ) $ that is a polynomial of degree three on each segment $ [ x _ {k-1} , x _ {k} ] $, belongs to the class of twice continuously-differentiable functions, and satisfies the conditions

$$ S _ {3} ( x _ {k} ) = f ( x _ {k} ) ,\ \ k = 0 \dots n ; \ n \geq 2 . $$

There are still two free parameters in this definition; these are determined by additional boundary conditions: $ S _ {3} ^ {( i) } ( a) = S _ {3} ^ {( i) } ( b) $, $ i = 1 , 2 $; $ S _ {3} ^ \prime ( a) = a _ {n} $, $ S _ {3} ^ \prime ( b) = b _ {n} $; or other conditions.
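In practice such splines are usually built by library routines. As a sketch, assuming SciPy is available, scipy.interpolate.CubicSpline constructs an interpolation spline of defect 1 and accepts boundary conditions of the kinds listed above (the grid and data here are hypothetical):

import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 2.0 * np.pi, 9)   # hypothetical grid Delta_n
y = np.sin(x)

# Clamped ends: S'(a) = a_n, S'(b) = b_n (here cos(0) = cos(2*pi) = 1).
s_clamped = CubicSpline(x, y, bc_type=((1, 1.0), (1, 1.0)))

# Matching end derivatives S^(i)(a) = S^(i)(b), i = 1, 2, for periodic data.
y_per = y.copy()
y_per[-1] = y_per[0]                   # periodic data must match exactly
s_periodic = CubicSpline(x, y_per, bc_type='periodic')

print(s_clamped(1.0), np.sin(1.0))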

Besides their immediate use in the problem of approximating functions, splines are also used in solving other problems; the splines are then required to coincide on the grid $ \Delta _ {n} $ not only with the values of the function $ f ( x) $, but also with those of its derivatives, up to a certain order.

When processing empirical data $ \{ y _ {k} \} $ one often determines the coefficients $ a _ {i} $ in $ L _ {n} ( x) $ by requiring

$$ S = \sum _ {k=1} ^ { m } [ y _ {k} - L _ {n} ( x _ {k} ) ] ^ {2} ,\ \ m \geq n , $$

to be minimal. The function $ L _ {n} ( x) $ thus constructed is called the interpolation function in the sense of least squares.
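A sketch of this least-squares determination of the coefficients, using numpy.polyfit, which minimizes exactly the sum $ S $ above (the noisy data are hypothetical):

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)                       # m = 20 data points
y = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(20)  # noisy measurements

coeffs = np.polyfit(x, y, deg=1)    # minimizes S = sum (y_k - L_n(x_k))^2
L = np.poly1d(coeffs)
print(coeffs)   # close to [2.0, 1.0]
print(L(0.5))   # smoothed value at x = 0.5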

The interpolation of functions in several variables meets with a number of fundamental and computational difficulties. E.g., in the case of algebraic interpolation the Lagrange interpolation polynomial of fixed degree need not, generally speaking, exist for an arbitrary system of distinct nodes. In particular, for a function $ f ( x , y ) $ in two variables such a polynomial $ L _ {n} ( x , y ) $ of total degree at most $ n $ can be constructed for nodes $ ( x _ {k} , y _ {k} ) $ only if these nodes do not lie on an algebraic curve of order $ n $.

Another approach to the interpolation of functions $ f ( x _ {1} \dots x _ {m} ) $ in several variables consists of first interpolating the function with respect to $ x _ {1} $ for fixed $ x _ {k} $, $ k = 2 \dots m $, then with respect to the next variable $ x _ {2} $ for the fixed remaining nodes, etc. The interpolation polynomial $ L _ {n _ {1} \dots n _ {m} } ( x _ {1} \dots x _ {m} ) $ for $ f ( x _ {1} \dots x _ {m} ) $ with nodes

$$ ( x _ {1} ^ {k _ {1} } \dots x _ {m} ^ {k _ {m} } ) , $$

$$ x _ {j} ^ \nu \neq x _ {j} ^ \mu ,\ \nu \neq \mu ; \ \ k _ {j} = 0 \dots n _ {j} ; \ j = 1 \dots m , $$

has the form:

$$ L _ {n _ {1} \dots n _ {m} } ( x _ {1} \dots x _ {m} ) = \sum _ {k _ {1} = 0 } ^ {n _ {1} } \dots \sum _ {k _ {m} = 0 } ^ {n _ {m} } \frac{\omega _ {n _ {1} } ( x _ {1} ) \dots \omega _ {n _ {m} } ( x _ {m} ) }{\omega _ {n _ {1} } ^ \prime ( x _ {1} ^ {k _ {1} } ) \dots \omega _ {n _ {m} } ^ \prime ( x _ {m} ^ {k _ {m} } ) } \cdot \frac{f ( x _ {1} ^ {k _ {1} } \dots x _ {m} ^ {k _ {m} } ) }{( x _ {1} - x _ {1} ^ {k _ {1} } ) \dots ( x _ {m} - x _ {m} ^ {k _ {m} } ) } , $$

where

$$ \omega _ {n _ {j} } ( x _ {j} ) = \prod _ {k _ {j} = 0 } ^ {n _ {j} } ( x _ {j} - x _ {j} ^ {k _ {j} } ) ,\ \ j = 1 \dots m . $$
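A sketch of this variable-by-variable (tensor-product) construction for $ m = 2 $; the grid and the test function are hypothetical choices:

import numpy as np

def lagrange_basis(nodes, k, t):
    # l_k(t) = prod_{j != k} (t - t_j) / (t_k - t_j)
    p = 1.0
    for j, tj in enumerate(nodes):
        if j != k:
            p *= (t - tj) / (nodes[k] - tj)
    return p

def tensor_interp(xnodes, ynodes, fvals, x, y):
    # L_{n1,n2}(x, y) = sum_k sum_l f(x_k, y_l) l_k(x) l_l(y)
    return sum(fvals[k][l] * lagrange_basis(xnodes, k, x) * lagrange_basis(ynodes, l, y)
               for k in range(len(xnodes)) for l in range(len(ynodes)))

xn, yn = [0.0, 0.5, 1.0], [0.0, 1.0]
f = lambda x, y: np.exp(x) * (1 + y)
F = [[f(a, b) for b in yn] for a in xn]
print(tensor_interp(xn, yn, F, 0.25, 0.5), f(0.25, 0.5))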

Interpolation splines for functions of several variables are defined on a multi-dimensional grid, with corresponding changes, in analogy with the one-dimensional case. Interpolation of functions is used for replacing complicated functions by simpler ones that can be computed more quickly; for the approximate recovery of a function on its entire domain from its values at individual points; and for smoothing functions that describe a running process. Such problems are of independent interest, and they also arise in an auxiliary fashion in many branches of science and technology when solving complex problems. Interpolation of functions is also used in approximately finding limit values of functions, in problems of accelerating the convergence of series or sequences, etc.

Numerically solving systems of non-linear equations.

The general ideas for constructing interpolation methods for solving an equation $ f ( x) = 0 $ or a system of equations $ f _ {i} ( x _ {1} \dots x _ {m} ) = 0 $, $ i = 1 \dots m $, are the same. The difficulties inherent in interpolating functions of several variables become especially apparent when such methods are analyzed and applied in practice to systems with many equations. The basic principle for obtaining interpolation methods for solving an equation $ f( x)= 0 $ is the replacement of $ f ( x) $ by an interpolation polynomial $ L _ {n} ( x) $ and the subsequent solution of $ L _ {n} ( x) = 0 $. The roots of $ L _ {n} ( x) = 0 $ are taken as approximate values of those of $ f ( x) = 0 $. The interpolation polynomial $ L _ {n} ( x) $ is also used in constructing iteration methods for solving $ f( x) = 0 $.

E.g., taking for $ x _ {n+1} $ the root of the linear algebraic interpolation polynomial constructed with respect to the values $ f ( x _ {n} ) $ and $ f ^ { \prime } ( x _ {n} ) $ at $ x _ {n} $, or with respect to the values $ f ( x _ {n-1} ) $ and $ f ( x _ {n} ) $ at $ x _ {n-1} $ and $ x _ {n} $, leads to the method of Newton (cf. Newton method) and the secant method, respectively:

$$ x _ {n+1} = x _ {n} - \frac{f ( x _ {n} ) }{f ^ { \prime } ( x _ {n} ) } ,\ \ x _ {n+1} = x _ {n} - \frac{f ( x _ {n} ) }{f ( x _ {n-1} , x _ {n} ) } , $$

where $ f ( x _ {n-1} , x _ {n} ) $ is the divided difference of $ f ( x) $ at $ x _ {n-1} $ and $ x _ {n} $. Under certain conditions, the sequence $ \{ x _ {n} \} $ converges, as $ n \rightarrow \infty $, to a solution of $ f ( x) = 0 $.
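A minimal sketch of both iterations; the starting points and stopping rule are hypothetical choices:

def newton(f, df, x0, tol=1e-12, maxit=50):
    x = x0
    for _ in range(maxit):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def secant(f, x0, x1, tol=1e-12, maxit=50):
    for _ in range(maxit):
        # divided difference f(x0, x1) = (f(x1) - f(x0)) / (x1 - x0)
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        x0, x1 = x1, x2
        if abs(x1 - x0) < tol:
            break
    return x1

f = lambda x: x**2 - 2.0
print(newton(f, lambda x: 2.0 * x, 1.0))  # ~ sqrt(2)
print(secant(f, 1.0, 2.0))                # ~ sqrt(2)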

Another way to construct methods for solving an equation $ f ( x) = 0 $ is based on interpolating the inverse function $ x = g ( y) $. Suppose that for interpolating $ g ( y) $ the algebraic Lagrange interpolation polynomial

$$ L _ {n} ( y) = \sum _ {k=0} ^ { n } \frac{\omega _ {n} ( y) }{\omega _ {n} ^ \prime ( y _ {k} ) ( y - y _ {k} ) } g ( y _ {k} ) ,\ \ \omega _ {n} ( y) = \prod _ {k=0} ^ { n } ( y - y _ {k} ) , $$

is taken. It is assumed that the inverse function exists in a neighbourhood of the required root of $ f ( x) = 0 $ and that a table of values $ x _ {k} $ and $ y _ {k} = f ( x _ {k} ) $, $ k = 0 \dots n $, is available. The next approximate value $ x _ {n+1} $ is the value of the interpolation polynomial at zero:

$$ x _ {n+1} = - \omega _ {n} ( 0) \sum _ {k=0} ^ { n } \frac{x _ {k} }{\omega _ {n} ^ \prime ( y _ {k} ) y _ {k} } . $$
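A sketch of one step of this inverse interpolation, computed directly from the last formula; the table of values is hypothetical:

import numpy as np

def inverse_interpolation_root(xs, ys):
    # Evaluate the Lagrange polynomial interpolating the inverse function
    # x = g(y) through the points (y_k, x_k) at y = 0:
    #   x_{n+1} = -omega_n(0) * sum_k x_k / (omega_n'(y_k) * y_k)
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    omega0 = np.prod(-ys)                    # omega_n(0) = prod (0 - y_k)
    total = 0.0
    for k in range(len(ys)):
        dom = np.prod([ys[k] - ys[j] for j in range(len(ys)) if j != k])
        total += xs[k] / (dom * ys[k])       # dom = omega_n'(y_k)
    return -omega0 * total

f = lambda x: x**2 - 2.0
xs = [1.3, 1.4, 1.5]
ys = [f(x) for x in xs]
print(inverse_interpolation_root(xs, ys))    # ~ 1.41421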

Numerical integration.

The apparatus of interpolation underlies the construction of many quadrature and cubature formulas. Such formulas are constructed by replacing the integrand, on the entire domain or on parts of it, by an interpolation polynomial of a certain kind and integrating the latter.

E.g., quadrature formulas of highest algebraic accuracy, also called Gauss quadrature formulas (cf. Gauss quadrature formula)

$$ \int\limits _ { a } ^ { b } p ( x) f ( x) d x \approx \sum _ {k=1} ^ { n } A _ {k} f ( x _ {k} ) , $$

where $ p ( x) $ is a sign-definite weight function, are obtained by replacing $ f ( x) $ by algebraic interpolation polynomials constructed with respect to the roots $ x _ {k} $ of the polynomial of degree $ n $ that is orthogonal on $ [ a , b ] $ with respect to the weight $ p ( x) $.
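For the weight $ p ( x) \equiv 1 $ on $ [ - 1 , 1 ] $ the orthogonal polynomials are the Legendre polynomials; a sketch using the ready-made nodes and weights from NumPy (the integrand is a hypothetical choice):

import numpy as np

# Gauss-Legendre formula: exact for polynomials of degree <= 2n - 1.
n = 5
nodes, weights = np.polynomial.legendre.leggauss(n)

f = lambda x: np.cos(x)
approx = np.sum(weights * f(nodes))
print(approx, 2.0 * np.sin(1.0))   # integral of cos over [-1, 1]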

If one partitions the complete integration interval $ [ a , b ] $ into an even number $ n $ of equal parts of length $ h = ( b - a ) / n $ and if on each double part one replaces $ f ( x) $ by its quadratic interpolation polynomial with nodes at the end and middle points, then one is led to the composite Simpson formula

$$ \int\limits _ { a } ^ { b } f ( x) d x \approx \frac{b - a }{3 n } [ f _ {0} + f _ {n} + 2 ( f _ {2} + f _ {4} + \dots + f _ {n-2} ) + 4 ( f _ {1} + f _ {3} + \dots + f _ {n-1} ) ] , $$

where $ f _ {k} = f ( a + k h ) $, $ k = 0 \dots n $.
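A direct sketch of the composite Simpson formula; the test integrand is a hypothetical choice:

import numpy as np

def simpson(f, a, b, n):
    # n must be even; h = (b - a) / n
    assert n % 2 == 0
    x = np.linspace(a, b, n + 1)
    fx = f(x)
    return (b - a) / (3 * n) * (fx[0] + fx[-1]
                                + 2 * np.sum(fx[2:-1:2])    # f_2, f_4, ..., f_{n-2}
                                + 4 * np.sum(fx[1::2]))     # f_1, f_3, ..., f_{n-1}

print(simpson(np.sin, 0.0, np.pi, 10))  # ~ 2.0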

One may also take interpolation splines of some fixed degree as a basis. The scheme explained above for constructing formulas for the approximate computation of integrals can be used in the multi-dimensional case as well.

Numerical differentiation.

Formulas for numerical differentiation are obtained by differentiating interpolation formulas. Here, as a rule, certain a priori information about the smoothness of the function to be differentiated is available.

Let $ L _ {n} ( x) $ be a certain interpolation polynomial of a function $ f ( x) $, and let $ R _ {n} ( x) $ be the remainder in the interpolation formula

$$ f ( x) = L _ {n} ( x) + R _ {n} ( x) . $$

If in

$$ f ^ { ( i) } ( x) = L _ {n} ^ {( i) } ( x) + R _ {n} ^ { ( i) } ( x) $$

the quantity $ R _ {n} ^ { ( i) } ( x) $ is neglected, one obtains a formula for the approximate computation of the $ i $- th derivative of $ f ( x) $:

$$ \tag{3 } f ^ { ( i) } ( x) \approx L _ {n} ^ {( i) } ( x) . $$

Formulas for numerical differentiation, with interpolation at their basis, are obtained from (3) depending on the choice of $ L _ {n} ( x) $. For numerical differentiation one uses, as a rule, approximate values of the function at the nodes; the error in a formula of numerical differentiation depends not only on the manner of interpolation and the interpolation step, but also on the errors in the values of the function at the nodes used. E.g., in the case of the linear approximation (1)

$$ \tag{4 } f ^ { \prime } ( x _ {k} ) \approx \frac{1}{h} [ f ( x _ {k+1} ) - f ( x _ {k} ) ] , $$

and the remainder $ R _ {1} ^ { \prime } ( x) $ has the representation

$$ R _ {1} ^ { \prime } ( x _ {k} ) = - \frac{1}{2} f ^ { \prime\prime } ( \xi ) h ,\ \ \xi \in ( x _ {k} , x _ {k+1} ) ,\ \ h = x _ {k+1} - x _ {k} . $$

If $ f ( x _ {k+1} ) $ and $ f ( x _ {k} ) $ are known with respective errors $ \epsilon _ {k+1} $, $ \epsilon _ {k} $, $ \epsilon _ {k+1} \neq \epsilon _ {k} $, then the error in (4) will contain yet another term $ ( \epsilon _ {k+1} - \epsilon _ {k} ) / h $, which decreases as the step $ h $ increases. When using formulas for numerical differentiation, the interpolation step must therefore be chosen in accordance with the errors in the values of the function. In practice it thus often happens that a function known with some error on a dense grid is interpolated only on a coarser grid.
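A sketch illustrating the two error sources in (4); the noise level eps is a hypothetical choice:

import numpy as np

rng = np.random.default_rng(1)
f, df = np.sin, np.cos
x, eps = 1.0, 1e-8    # eps: error level of the tabulated values

for h in [1e-1, 1e-4, 1e-8]:
    noisy = lambda t: f(t) + eps * rng.uniform(-1, 1)
    approx = (noisy(x + h) - noisy(x)) / h     # formula (4)
    # total error ~ h/2 * |f''| + 2*eps/h: decreasing h eventually hurts
    print(h, abs(approx - df(x)))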

Numerically solving integral equations.

The unknown function $ \phi ( x) $ in the integral equation is replaced by some interpolation formula (an interpolation polynomial, an interpolation spline, etc.) with nodes $ x _ {k} $, and an approximate value $ \phi ( x _ {k} ) $ is determined from the system obtained after replacing the independent variable $ x $ by the nodes $ x _ {k} $.

E.g., for the linear Fredholm integral equation of the second kind

$$ \tag{5 } \phi ( x) = \lambda \int\limits _ { a } ^ { b } K ( x , s ) \phi ( s) d s + f ( x) $$

one can use the Lagrange form of the interpolation polynomial,

$$ \phi ( x) = \sum _ {k=0} ^ { n } l _ {k} ( x) \phi ( x _ {k} ) + R _ {n} ( x) \equiv L _ {n} ( x) + R _ {n} ( x) , $$

where $ R _ {n} ( x) $ is the remainder and

$$ l _ {k} ( x) = \frac{\omega _ {n} ( x) }{\omega _ {n} ^ \prime ( x _ {k} ) ( x - x _ {k} ) } ,\ \ \omega _ {n} ( x) = \prod _ {k=0} ^ { n } ( x - x _ {k} ) . $$

Replacing $ \phi ( x) $ in (5) by its interpolation polynomial $ L _ {n} ( x) $ and $ x $ by $ x _ {i} $, one obtains the linear system of equations

$$ \phi _ {i} = \lambda \sum _ {k=0} ^ { n } M _ {k} ( x _ {i} ) \phi _ {k} + f ( x _ {i} ) , $$

$$ M _ {k} ( x _ {i} ) = \int\limits _ { a } ^ { b } K ( x _ {i} , s ) l _ {k} ( s) d s ,\ i = 0 \dots n , $$

for determining the approximate value $ \phi _ {i} $ of $ \phi ( x) $ at $ x _ {i} $. In the case of non-linear integral equations the approximate value $ \phi _ {i} $ has to be determined from the corresponding non-linear system.
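A sketch of this scheme for the linear equation (5); the kernel, $ \lambda $ and $ f $ are hypothetical choices, and the weights $ M _ {k} ( x _ {i} ) $ are computed by a library quadrature:

import numpy as np
from scipy.integrate import quad

a, b, lam = 0.0, 1.0, 0.5
K = lambda x, s: x * s                 # hypothetical kernel
f = lambda x: x                        # hypothetical right-hand side
nodes = np.linspace(a, b, 5)           # interpolation nodes x_k

def l(k, s):
    # Lagrange basis l_k(s) over the nodes
    p = 1.0
    for j, xj in enumerate(nodes):
        if j != k:
            p *= (s - xj) / (nodes[k] - xj)
    return p

n = len(nodes)
M = np.array([[quad(lambda s: K(xi, s) * l(k, s), a, b)[0]
               for k in range(n)] for xi in nodes])
# phi_i = lam * sum_k M_ik phi_k + f_i  =>  (I - lam*M) phi = f
phi = np.linalg.solve(np.eye(n) - lam * M, f(nodes))
print(phi)   # ~ 1.2 * nodes for this degenerate kernel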

Numerically solving differential equations.

One of the possibilities for constructing numerical methods for solving differential equations consists of replacing the derivatives of the unknown functions by interpolation formulas for numerical differentiation and, in a number of cases, also of replacing other functions and expressions occurring in the equation by interpolants.

Suppose one has the following formulas for numerical differentiation with equally-spaced nodes $ x _ {k} = x _ {0} + k h $:

$$ \tag{6 } y ^ \prime ( x _ {k} ) \approx \frac{y ( x _ {k+1} ) - y ( x _ {k} ) }{h} = \frac{1}{h} \Delta y ( x _ {k} ) , $$

$$ y ^ {\prime\prime} ( x _ {k} ) \approx \frac{1}{h ^ {2} } [ y ( x _ {k-1} ) - 2 y ( x _ {k} ) + y ( x _ {k+1} ) ] = \frac{1}{h ^ {2} } \Delta ^ {2} y ( x _ {k} ) , $$

obtained by differentiation of the formulas of linear and quadratic interpolation (1) and (2), respectively. Then for a second-order ordinary differential equation

$$ F ( x , y , y ^ \prime , y ^ {\prime\prime} ) = 0 $$

one obtains, under certain additional conditions, taking (6) into account, the finite-difference equation

$$ F \left ( x _ {k} , y _ {k} ,\ \frac{1}{h} \Delta y _ {k} ,\ \frac{1}{h ^ {2} } \Delta ^ {2} y _ {k} \right ) = 0 . $$

This equation, together with the equations obtained from the additional conditions, determines the approximate values $ y _ {k} $ of the solution $ y ( x) $ at the nodes $ x _ {k} $.
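A sketch for the simplest case $ y ^ {\prime\prime} = f ( x) $ with the hypothetical additional conditions $ y ( a) = y ( b) = 0 $, using the $ \Delta ^ {2} $ formula from (6):

import numpy as np

a, b, n = 0.0, 1.0, 50
h = (b - a) / n
x = np.linspace(a, b, n + 1)
f = lambda t: -np.pi**2 * np.sin(np.pi * t)   # exact solution: sin(pi t)

# (y_{k-1} - 2 y_k + y_{k+1}) / h^2 = f(x_k), k = 1..n-1; y_0 = y_n = 0
A = (np.diag(-2.0 * np.ones(n - 1))
     + np.diag(np.ones(n - 2), 1)
     + np.diag(np.ones(n - 2), -1)) / h**2
y_inner = np.linalg.solve(A, f(x[1:n]))
print(np.max(np.abs(y_inner - np.sin(np.pi * x[1:n]))))   # error is O(h^2)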

The reduction of partial differential equations to the corresponding finite-difference equations is also often carried out using formulas for numerical differentiation.

Interpolation methods are applied to solve differential equations written in integral form. E.g., to find an approximate solution of the Cauchy problem

$$ \tag{7 } y ^ \prime = f ( x , y ) ,\ \ y ( x _ {0} ) = y _ {0} , $$

at points $ x _ {k} = x _ {0} + k h $, $ k = 0 , 1 \dots $ one uses difference formulas of the form

$$ y _ {n+1} = y _ {n} + h \sum _ {j=1} ^ { n } B _ {j} f ( x _ {n-j} , y _ {n-j} ) , $$

obtained by replacing the integrand in

$$ y ( x _ {n+1} ) = y ( x _ {n} ) + \int\limits _ {x _ {n} } ^ {x _ {n+1} } f ( x , y ) d x $$

by an interpolation polynomial and subsequent integration. In particular, Adams' formula for first-order equations (7), Störmer's formula for second-order equations, etc., are obtained this way.
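A sketch of one concrete formula of this family (with the sum taken from $ j = 0 $), the two-step Adams formula $ y _ {n+1} = y _ {n} + h ( 3 f _ {n} - f _ {n-1} ) / 2 $; the test problem is hypothetical, and the first step is taken by Euler's method:

import numpy as np

def adams_bashforth2(f, x0, y0, h, steps):
    xs, ys = [x0], [y0]
    ys.append(y0 + h * f(x0, y0))          # start-up step (Euler)
    xs.append(x0 + h)
    for n in range(1, steps):
        fn = f(xs[n], ys[n])
        fn1 = f(xs[n - 1], ys[n - 1])
        ys.append(ys[n] + h * (1.5 * fn - 0.5 * fn1))
        xs.append(xs[n] + h)
    return np.array(xs), np.array(ys)

# Hypothetical test: y' = y, y(0) = 1, exact solution e^x.
xs, ys = adams_bashforth2(lambda x, y: y, 0.0, 1.0, 0.01, 100)
print(ys[-1], np.exp(xs[-1]))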

This approach makes it possible to construct numerical algorithms for a wide class of differential equations, including partial differential equations. The study of the solvability, exactness and stability of solutions of finite-difference equations that arise constitutes a fundamental and difficult part of the theory of numerically solving differential equations.

Interpolation of operators and some general approaches to the construction of numerical methods.

The construction of numerical methods for solving mathematical problems written as $ A x = y $, where $ x $ and $ y $ are elements of certain sets $ X $ and $ Y $ and $ A $ is a given operator, consists of replacing $ X $, $ Y $ and $ A $, or only some of these three objects, by other objects that are convenient for calculating purposes. This replacement should be carried out in such a way that the solution to the new problem

$$ \widetilde{y} = \widetilde{A} \widetilde{x} , $$

consisting of finding $ \widetilde{y} $ or $ \widetilde{x} $, is in some sense close to the solution of the original problem. One of the methods for replacing $ A $ by an approximation $ \widetilde{A} $ is the use of interpolation of operators. The problem of the interpolation of operators has various formulations. A linear interpolation operator $ L _ {1} ( F ; x ) $ for a given operator $ F $ is written as

$$ \tag{8 } L _ {1} ( F ; x ) = \ F ( x _ {0} ) + F ( x _ {0} , x _ {1} ) ( x - x _ {0} ) , $$

where $ x _ {0} $, $ x _ {1} $ are the interpolation nodes and $ F( x _ {0} , x _ {1} ) $ is the first-order divided-difference operator. The latter is defined as the linear operator for which

$$ F ( x _ {0} , x _ {1} ) ( x _ {0} - x _ {1} ) = \ F ( x _ {0} ) - F ( x _ {1} ) . $$

The given definition of the divided-difference operator can be made concrete in a number of cases. Using linear interpolation (8), the "secant method" for the equation $ F ( x) = 0 $ can be written as

$$ x _ {n+1} = x _ {n} - F ^ { - 1 } ( x _ {n-1} , x _ {n} ) F ( x _ {n} ) , $$

where $ F ^ { - 1 } ( x _ {n-1} , x _ {n} ) $ is the operator inverse to $ F ( x _ {n-1} , x _ {n} ) $.
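One concrete realization of the divided-difference operator for a system $ F : \mathbf R ^ {m} \rightarrow \mathbf R ^ {m} $ is a matrix built column by column from mixed arguments, so that the defining relation above holds exactly by telescoping; a sketch (the test system is hypothetical):

import numpy as np

def divided_difference_matrix(F, u, v):
    # Column j mixes the arguments so that, by telescoping,
    # divided_difference_matrix(F, u, v) @ (u - v) = F(u) - F(v).
    m = len(u)
    J = np.empty((m, m))
    for j in range(m):
        hi = np.concatenate([u[:j + 1], v[j + 1:]])
        lo = np.concatenate([u[:j], v[j:]])
        J[:, j] = (F(hi) - F(lo)) / (u[j] - v[j])
    return J

def operator_secant(F, x0, x1, maxit=30, tol=1e-12):
    # x_{n+1} = x_n - F^{-1}(x_{n-1}, x_n) F(x_n)
    for _ in range(maxit):
        J = divided_difference_matrix(F, x0, x1)
        x2 = x1 - np.linalg.solve(J, F(x1))
        x0, x1 = x1, x2
        if np.linalg.norm(F(x1)) < tol:
            break
    return x1

# Hypothetical system: x^2 + y^2 = 1, x = y  =>  x = y = 1/sqrt(2).
F = lambda z: np.array([z[0]**2 + z[1]**2 - 1.0, z[0] - z[1]])
print(operator_secant(F, np.array([1.0, 0.0]), np.array([0.8, 0.4])))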

The formulation of the problem of interpolation of functionals, which is of interest in the theory of approximation methods, is as follows. Let $ \{ \psi _ {i} ( x) \} _ {i=0} ^ {n} $ be some fixed functionals, or classes of functionals, defined on $ X $. A functional $ L _ {n} [ F ; x ] $ is called an interpolation functional polynomial for a given functional $ F ( x) $ and system of points $ \{ x _ {k} \} $ in $ X $ if the relations

$$ L _ {n} [ F ; x _ {k} ] = F ( x _ {k} ) ,\ \ L _ {n} [ \psi _ {i} ; x ] \equiv \psi _ {i} ( x) , \ i = 0 \dots n , $$

hold.

Interpolation of functionals is used in the construction of approximate methods for computing continual integrals, in finding extrema of functionals, and in a number of other cases.

E.g., approximate interpolation formulas for computing continual integrals have the form

$$ \int\limits _ { X } F ( x) d \mu ( x) \approx \ \int\limits _ { X } L _ {n} [ F ; x ] d \mu ( x) , $$

where the integral over the interpolation polynomial $ L _ {n} [ F ; x ] $ with respect to a certain measure $ \mu $ can be computed exactly or can be reduced to a finite-dimensional integral. When $ X $ is the space $ C [ a , b ] $ of continuous functions on an interval $ [ a , b ] $, then $ L _ {1} [ F ; x ] $ can be represented by a Stieltjes integral,

$$ L _ {1} [ F ; x ] = F ( x _ {0} ) + \int\limits _ { a } ^ { b } \frac{x ( \tau ) - x _ {0} ( \tau ) }{x _ {1} ( \tau ) - x _ {0} ( \tau ) } d _ \tau F [ x _ {0} ( \cdot ) + \chi ( \cdot , \tau ) ( x _ {1} ( \cdot ) - x ( \cdot ) ) ] , $$

where $ x _ {0} ( t) $, $ x _ {1} ( t) $ are the interpolation nodes while

$$ \chi ( t , \tau ) = \ \left \{ \begin{array}{ll} 1, & \tau > t , \\ 0, & \tau \leq t . \\ \end{array} \right .$$

If $ F ( x) $ is a constant or a linear functional, then $ L _ {1} [ F ; x ] \equiv F ( x) $.

The use of interpolation in finding extremal values of functionals may be illustrated by two interpolation analogues of the gradient method for finding a local unconditional minimum of a functional $ F ( x) $ that is defined on some Hilbert space. The first analogue is obtained if, in the gradient method, $ \mathop{\rm grad} F ( x _ {n} ) $ is replaced by $ F ( x _ {n-1} , x _ {n} ) $, i.e.

$$ \tag{9 } x _ {n+1} = x _ {n} - \epsilon _ {n} F ( x _ {n-1} , x _ {n} ) ,\ \ \epsilon _ {n} > 0 ,\ \ n = 1 , 2 ,\dots . $$

The second analogue uses the gradient of the interpolation polynomial. From approximations $ x _ {n-2} $, $ x _ {n-1} $, $ x _ {n} $ to an extremum $ x ^ {*} $ of $ F ( x) $ one constructs the quadratic interpolation polynomial

$$ L _ {2} [ F ; x ] = F ( x _ {n} ) + F ( x _ {n-1} , x _ {n} ) ( x - x _ {n} ) + F ( x _ {n-2} , x _ {n-1} , x _ {n} ) ( x - x _ {n-1} ) ( x - x _ {n} ) , $$

where $ F ( x _ {n-2} , x _ {n-1} , x _ {n} ) $ is the second-order divided difference of $ F ( x) $ with respect to $ x _ {n-2} $, $ x _ {n-1} $, $ x _ {n} $. The new approximation $ x _ {n+1} $ is determined by

$$ \tag{10 } x _ {n+1} = x _ {n} - \epsilon _ {n} \mathop{\rm grad} L _ {2} [ F ; x _ {n} ] , $$

$$ \epsilon _ {n} > 0 ,\ n = 2 , 3 ,\dots . $$

The interpolation methods (9) and (10) use two and three initial approximations, respectively.
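In one dimension, setting $ \mathop{\rm grad} L _ {2} [ F ; x ] = 0 $ exactly, instead of taking the damped step (10), gives the classical successive parabolic interpolation; a sketch (the test functional is hypothetical):

def divided_diff(F, a, b):
    # first-order divided difference F(a, b)
    return (F(b) - F(a)) / (b - a)

def parabolic_minimize(F, x0, x1, x2, maxit=30, tol=1e-10):
    # Minimize the quadratic L_2[F; x] built from three approximations;
    # grad L_2 = 0 has the closed-form solution used below.
    for _ in range(maxit):
        d1 = divided_diff(F, x1, x2)                         # F(x1, x2)
        d2 = (d1 - divided_diff(F, x0, x1)) / (x2 - x0)      # F(x0, x1, x2)
        x_new = 0.5 * (x1 + x2) - d1 / (2.0 * d2)
        if abs(x_new - x2) < tol:
            return x_new
        x0, x1, x2 = x1, x2, x_new
    return x2

F = lambda x: (x - 1.3)**2 + 0.7    # hypothetical functional, minimum at 1.3
print(parabolic_minimize(F, 0.0, 0.5, 1.0))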

The use of interpolation of operators and functionals in the construction of computational algorithms for solving concrete problems rests on interpolation formulas with a small error. Such formulas must be constructed for individual classes of functionals and operators, taking the specific features of these classes into account.

References

[1] V.L. Goncharov, "The theory of interpolation and approximation of functions" , Moscow (1954) (In Russian)
[2] I.S. Berezin, N.P. Zhidkov, "Computing methods" , Pergamon (1973) (Translated from Russian)
[3] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian)
[4] V.I. Krylov, V.V. Bobkov, P.I. Monastyrnyi, "Numerical methods" , 1–2 , Moscow (1976–1977) (In Russian)
[5] S.B. Stechkin, Yu.N. Subbotin, "Splines in numerical mathematics" , Moscow (1976) (In Russian)
[6] A.A. Samarskii, "Theorie der Differenzverfahren" , Akad. Verlagsgesell. Geest u. Portig K.-D. (1984) (Translated from Russian)
[7] L.A. Yanovich, "Approximate computation of continual integrals with respect to Gaussian measures" , Minsk (1976) (In Russian)

Comments

A detailed theoretical treatment of interpolation theory is provided by the monograph [a2], while applications of interpolation in numerical mathematics can be found in [a1].

References

[a1] F.B. Hildebrand, "Introduction to numerical analysis" , McGraw-Hill (1974)
[a2] P.J. Davis, "Interpolation and approximation" , Dover, reprint (1975) pp. 108–126
[a3] J.C. Mason (ed.) M.G. Cox (ed.) , Algorithms for approximation , Clarendon Press (1987)
How to Cite This Entry:
Interpolation in numerical mathematics. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Interpolation_in_numerical_mathematics&oldid=16272
This article was adapted from an original article by L.A. Yanovich (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article