Interpolation formula

From Encyclopedia of Mathematics
A formula for the approximate calculation of values of a function $ f ( x) $ by replacing it by a function

$$
g ( x)  \equiv  g ( x ; a _ {0}, \dots, a _ {n} )
$$
  
that is simple in a certain sense and belongs to a certain class. The parameters $ a _ {i} $, $ i = 0, \dots, n $, are chosen in such a way that the values of $ g ( x) $ coincide with the known values of $ f ( x) $ on a given set of $ n + 1 $ distinct values of the argument:

$$ \tag{1 }
g ( x _ {k} )  =  f ( x _ {k} ) ,\ \
k = 0, \dots, n .
$$

This method of approximately representing a function is called [[Interpolation|interpolation]], and the points $ x _ {k} $ at which (1) should hold are called interpolation nodes. Instead of the simplest condition (1), the values of some quantity related to $ f ( x) $ may also be given, e.g. the values of a derivative of $ f ( x) $ at interpolation nodes.
The method of linear interpolation is the most widespread among the interpolation methods. The approximation is now looked for in the class of (generalized) polynomials

$$ \tag{2 }
g ( x ; a _ {0}, \dots, a _ {n} )  = \
\sum _ { i= 0} ^ { n }  a _ {i} \phi _ {i} ( x)
$$
  
in some fixed system of functions $ \phi _ {0} ( x), \dots, \phi _ {n} ( x) $. In order for the interpolation polynomial (2) to exist for any function $ f ( x) $ defined on an interval $ [ a , b ] $, and for any choice of $ n + 1 $ nodes $ x _ {0}, \dots, x _ {n} \in [ a , b ] $, $ x _ {i} \neq x _ {j} $ if $ i \neq j $, it is necessary and sufficient that $ \{ \phi _ {i} ( x) \} $ is a [[Chebyshev system|Chebyshev system]] of functions on $ [ a , b ] $. The interpolation polynomial will, moreover, be unique and its coefficients $ a _ {i} $ can be found by directly solving (1).
  
For $ \{ \phi _ {i} ( x) \} $ one often takes: the sequence of powers of $ x $,

$$
1 , x , x  ^ {2}, \dots
$$
  
 
the sequence of trigonometric functions,

$$
1 , \sin  x , \cos  x ,\
\sin  2 x , \cos  2 x, \dots
$$
  
 
or the sequence of exponential functions,

$$
1 , e ^ {\alpha _ {1} x } ,\
e ^ {\alpha _ {2} x }, \dots
$$

where $ \{ \alpha _ {i} \} $ is a sequence of distinct real numbers.
  
 
When interpolating by algebraic polynomials

$$ \tag{3 }
\sum _ { i= 0} ^ { n }  a _ {i} x  ^ {i}
$$

the system $ \{ \phi _ {i} ( x) \} $ is

$$ \tag{4 }
\phi _ {i} ( x)  =  x  ^ {i} ,\ \
i = 0, \dots, n ,
$$
  
 
while (1) has the form

$$ \tag{5 }
\sum _ { i= 0} ^ { n }
a _ {i} x _ {k}  ^ {i}  = \
f ( x _ {k} ) ,\ \
k = 0, \dots, n .
$$
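For instance (a sketch using NumPy; the function and the four nodes below are arbitrary illustrative choices), (5) is a Vandermonde linear system and can be solved directly for the coefficients $ a _ {i} $:

```python
import numpy as np

# Hypothetical example: interpolate f(x) = 1/(1 + x) at n + 1 = 4 nodes.
f = lambda x: 1.0 / (1.0 + x)
nodes = np.array([0.0, 1.0, 2.0, 3.0])

# System (5): sum_i a_i x_k^i = f(x_k) is a Vandermonde system.
V = np.vander(nodes, increasing=True)   # V[k, i] = x_k ** i
a = np.linalg.solve(V, f(nodes))        # coefficients a_0, ..., a_n

# The interpolation conditions (1) now hold at every node.
g = np.polynomial.polynomial.polyval(nodes, a)
assert np.allclose(g, f(nodes))
```

In practice this direct solve is numerically ill-conditioned for large $ n $, which is one reason the explicit forms discussed below are preferred.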
  
 
The system (4) is a Chebyshev system, which ensures the existence and uniqueness of the interpolation polynomial (3). A property of (4) is the possibility of obtaining an explicit representation of the interpolation polynomial (3) without immediately having to solve (5). One of the explicit forms of (3),

$$ \tag{6 }
g _ {n} ( x)  =  L _ {n} ( x)  = \
\sum _ { i= 0} ^ { n }
f ( x _ {i} ) \prod _ {j \neq i }
\frac{x - x _ {j} }{x _ {i} - x _ {j} }
,
$$

is called the Lagrange interpolation polynomial (cf. [[Lagrange interpolation formula|Lagrange interpolation formula]]). If the derivative $ f ^ { ( n+ 1) } ( x) $ is continuous, the remainder of (6) can be written as

$$ \tag{7 }
f ( x) - g _ {n} ( x)  = \
\frac{f ^ { ( n + 1 ) } ( \xi ) }{( n + 1 ) ! }
\omega _ {n} ( x) ,
$$
  
$$
\xi  \in  [ y _ {1} , y _ {2} ] ,\ \
\omega _ {n} ( x)  =  \prod _ { i= 0} ^ { n }  ( x - x _ {i} ) ;
$$

where $ y _ {1} = \min ( x _ {0}, \dots, x _ {n} , x ) $; $ y _ {2} = \max ( x _ {0}, \dots, x _ {n} , x ) $. The value of the remainder (7) depends, in particular, on the values of $ \omega _ {n} ( x) $. The choice of interpolation nodes for which $ \sup _ {[ a , b ] }  | \omega _ {n} ( x) | $ is minimal is of interest. The distribution of the nodes is optimal in this sense if the roots
  
$$
x _ {k}  = \
\frac{b + a }{2}
 +
\frac{b - a }{2}
\cos 
\frac{2 k + 1 }{2 n + 2 }
\pi ,\ \
k = 0, \dots, n ,
$$
  
 
of the polynomial

$$
\overline{T} _ {n+1} ^ { [ a , b ] } ( x)  =  ( b - a )  ^ {n+1} 2  ^ {-1- 2n}
T _ {n+1}
\left (
\frac{2 x - ( b + a ) }{b - a }
\right ) ,
$$
  
which deviates least from zero on $ [ a , b ] $, are taken as the nodes. Here $ T _ {n+1} ( z) $ is the Chebyshev polynomial of degree $ n + 1 $ (cf. [[Chebyshev polynomials|Chebyshev polynomials]]).
  
 
There are a number of other explicit representations of (3) that are more useful for solving one or another practical interpolation problem (cf., e.g., [[Bessel interpolation formula|Bessel interpolation formula]]; [[Gauss interpolation formula|Gauss interpolation formula]]; [[Newton interpolation formula|Newton interpolation formula]]; [[Stirling interpolation formula|Stirling interpolation formula]]; [[Steffensen interpolation formula|Steffensen interpolation formula]]; [[Everett interpolation formula|Everett interpolation formula]]). If it is difficult to estimate in advance the degree of the interpolation polynomial needed to attain the desired error (e.g., when interpolating a table), then one has recourse to the [[Aitken scheme|Aitken scheme]], in which interpolation polynomials of increasing degrees are constructed sequentially, making it possible to control the accuracy during the computation. Another approach to the construction of interpolation formulas can be found in [[Fraser diagram|Fraser diagram]].
 
====References====

<table><TR><TD valign="top">[1]</TD> <TD valign="top">  I.S. Berezin,  N.P. Zhidkov,  "Computing methods" , Pergamon  (1973)  (Translated from Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  N.S. Bakhvalov,  "Numerical methods: analysis, algebra, ordinary differential equations" , MIR  (1977)  (Translated from Russian)</TD></TR></table>
 
 
  
 
====Comments====

Many interpolation formulas are contained in [[#References|[a1]]]–[[#References|[a3]]].
  
Consider the interpolation problem of finding a polynomial $ P _ {N} $ of degree $ \leq  N $ satisfying the conditions

$$ \tag{a1 }
P _ {N}  ^ {(k)} ( x _ {i} )  =  c _ {i,k} ,
$$
  
where the $ x _ {1}, \dots, x _ {m} $ are $ m $ distinct knots, and there are precisely $ N + 1 $ equations in (a1). If for each $ i $ the orders of the derivatives occurring in (a1) form an unbroken series $ k = 0, \dots, k _ {i} $, one has Hermite interpolation. (In case $ k _ {i} = 0 $ for all $ i $, i.e. if no interpolation conditions involving derivatives occur in (a1), one has Lagrange interpolation.) If gaps (lacunae) occur, one speaks of lacunary interpolation or Birkhoff interpolation. The pairs $ ( i , k ) $ which occur in (a1) are conveniently described in terms of an interpolation matrix $ E $ of size $ m \times ( n + 1 ) $, $ E = ( e _ {i,k} ) $, $ i = 1, \dots, m $, $ k = 0, \dots, n $, where $ e _ {ik} = 1 $ if $ ( i , k ) $ does occur in (a1) and $ e _ {ik} = 0 $ otherwise. The matrix $ E $ is called regular if (a1) is solvable for all choices of the $ x _ {i} $ and $ c _ {i,k} $, and singular otherwise.
  
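As a small illustrative sketch (the conditions are invented), here is the matrix $ E $ for a problem with $ m = 2 $ knots where $ P $ and $ P  ^  \prime  $ are prescribed at the first knot and $ P $ at the second, so $ N + 1 = 3 $:

```python
import numpy as np

# Hypothetical conditions (a1): P(x_1), P'(x_1) and P(x_2) are prescribed.
# Rows of E are knots i = 1, ..., m; columns are derivative orders k = 0, ..., n.
E = np.array([[1, 1],
              [1, 0]])

assert E.sum() == 3  # precisely N + 1 = 3 ones
# In every row the ones form an unbroken series k = 0, ..., k_i,
# so this E describes Hermite (not lacunary/Birkhoff) interpolation.
for row in E:
    ones = np.flatnonzero(row)
    assert np.array_equal(ones, np.arange(len(ones)))
```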
More generally, let $ G = \{ g _ {0}, \dots, g _ {N} \} $ be a system of linearly independent $ n $-times continuously-differentiable real-valued functions on an interval $ [ a, b] $ or on the circle. Instead of polynomials one now considers linear combinations $ P = \sum _ {j=0}  ^ {N} a _ {j} g _ {j} $. A matrix of zeros and ones $ E = ( e _ {ik} ) $, $ i = 1, \dots, m $, $ k = 0, \dots, n $, is an interpolation matrix if there are precisely $ N + 1 $ ones in $ E $ (and, usually, if there are no rows of zeros in $ E $; this means that every knot occurs at least once in an interpolation condition). Let $ K = \{ x _ {1}, \dots, x _ {m} \} $ be a set of knots, i.e. $ m $ distinct points of the interval or circle. Finally, for each $ ( i , k ) $ such that $ e _ {i,k} = 1 $ let there be given a number $ c _ {i,k} $. These data $ ( G , E , K , c _ {i,k} ) $ define a Birkhoff interpolation problem:
$$ \tag{a2 }
P  ^ {(k)} ( x _ {i} )  =  c _ {i,k} \ \
\textrm{ for  all  }  ( i , k )  \textrm{ such  that  }  e _ {i,k} = 1 .
$$

The pair $ ( E , K ) $ is called regular if (a2) is solvable for all choices of the $ c _ {i,k} $.
For each $ ( i , k ) $ such that $ e _ {i,k} = 1 $, consider the row vector of length $ N + 1 $,

$$
g _ {0}  ^ {(k)} ( x _ {i} ), \dots, g _ {N}  ^ {(k)} ( x _ {i} ) .
$$
For varying $ ( i , k ) $ such that $ e _ {i,k} = 1 $ one thus finds $ N + 1 $ row vectors, which together make up an $ ( N + 1 ) \times ( N + 1 ) $ matrix. The pair $ ( E , K ) $ is regular if and only if this matrix is invertible. Its determinant, where the pairs $ ( i , k ) $ with $ e _ {i,k} = 1 $ are ordered lexicographically, is denoted $ D ( E , K ) $.
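A sketch of this regularity test (NumPy; the basis $ g _ {j} ( x) = x  ^ {j} $, the matrix $ E $ and the knots are illustrative data):

```python
import numpy as np
from math import factorial

# Illustrative data: basis g_j(x) = x^j, j = 0, ..., N, and an interpolation
# matrix E with N + 1 = 3 ones (a Hermite-type problem).
E = np.array([[1, 1],          # P(x_1), P'(x_1)
              [1, 0]])         # P(x_2)
K = [0.0, 1.0]                 # knots x_1, x_2
N = E.sum() - 1

def basis_row(x, k, N):
    """Row (g_0^(k)(x), ..., g_N^(k)(x)) for the basis g_j(x) = x^j."""
    return [factorial(j) // factorial(j - k) * x ** (j - k) if j >= k else 0.0
            for j in range(N + 1)]

rows = [basis_row(K[i], k, N)  # pairs (i, k) taken in lexicographic order
        for i in range(E.shape[0]) for k in range(E.shape[1]) if E[i, k]]
D = np.linalg.det(np.array(rows, dtype=float))
assert abs(D) > 1e-12          # D(E, K) != 0, so the pair (E, K) is regular
```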
Suppose that the $ c _ {i,k} $ are the values $ f ^ { ( k) } ( x _ {i} ) $ of the derivatives of some function $ f $ at the knots. Then a simple formula for the solution of the interpolation problem (a2) follows from Cramer's rule. Indeed, if $ D  ^ {(j)} ( E , K ) $ denotes the determinant obtained by replacing $ g _ {j} $ with $ f $ in the formula for $ D ( E , K ) $, then

$$ \tag{a3 }
P ( t)  =  \sum _ { j= 0} ^ { N }
\frac{D  ^ {(j)} ( E , K ) }{D ( E , K ) }
g _ {j} ( t) .
$$
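Formula (a3) can be checked numerically (NumPy; the function $ f $, the knots and the Lagrange-type conditions below are made-up illustrative data):

```python
import numpy as np

# Illustrative data: g_j(x) = x^j with Lagrange conditions (k = 0 only)
# at three knots, and c_i = f(x_i) for a made-up f of degree N.
f = lambda x: x ** 2 + 1.0
K = np.array([0.0, 0.5, 2.0])
N = len(K) - 1

M = np.vander(K, increasing=True)   # rows (g_0(x_i), ..., g_N(x_i))
D = np.linalg.det(M)                # D(E, K), nonzero for distinct knots

def P(t):
    """Evaluate (a3): sum_j D^(j)(E, K) / D(E, K) * g_j(t)."""
    total = 0.0
    for j in range(N + 1):
        Mj = M.copy()
        Mj[:, j] = f(K)             # replace the column of g_j by f's values
        total += np.linalg.det(Mj) / D * t ** j
    return total

# P interpolates f at the knots; since deg f = N here, P reproduces f exactly.
assert np.allclose([P(x) for x in K], f(K))
assert abs(P(1.0) - f(1.0)) < 1e-9
```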
  
 
See also [[Hermite interpolation formula|Hermite interpolation formula]].
 
See also [[Hermite interpolation formula|Hermite interpolation formula]].

Latest revision as of 02:51, 21 March 2022


A formula for the approximate calculation of values of a function $ f ( x) $ by replacing it by a function

$$ g ( x) \equiv g ( x ; a _ {0}, \dots, a _ {n} ) $$

that is simple in a certain sense and belongs to a certain class. The parameters $ a _ {i} $, $ i = 0, \dots, n $, are chosen in such a way that the values of $ g( x) $ coincide with the known values of $ f ( x) $ on a given set of $ n + 1 $ distinct values of the argument:

$$ \tag{1 } g ( x _ {k} ) = f ( x _ {k} ) ,\ \ k = 0, \dots, n . $$

This method of approximately representing a function is called interpolation, and the points $ x _ {k} $ at which (1) should hold are called interpolation nodes. Instead of the simplest condition (1), the values of some quantity related to $ f ( x) $ may also be given, e.g. the values of a derivative of $ f ( x) $ at interpolation nodes.

The method of linear interpolation is the most widespread among the interpolation methods. The approximation is now looked for in the class of (generalized) polynomials

$$ \tag{2 } g ( x ; a _ {0}, \dots, a _ {n} ) = \ \sum _ { i= 0} ^ { n } a _ {i} \phi _ {i} ( x) $$

in some fixed system of functions $ \phi _ {0} ( x), \dots, \phi _ {n} ( x) $. In order for the interpolation polynomial (2) to exist for any function $ f ( x) $ defined on an interval $ [ a , b ] $, and for any choice of $ n + 1 $ nodes $ x _ {0}, \dots, x _ {n} \in [ a , b ] $, $ x _ {i} \neq x _ {j} $ if $ i \neq j $, it is necessary and sufficient that $ \{ \phi _ {i} ( x) \} $ is a Chebyshev system of functions on $ [ a , b ] $. The interpolation polynomial will, moreover, be unique and its coefficients $ a _ {i} $ can be found by directly solving (1).

For $ \{ \phi _ {i} ( x) \} $ one often takes: the sequence of powers of $ x $,

$$ 1 , x , x ^ {2}, \dots $$

the sequence of trigonometric functions,

$$ 1 , \sin x , \cos x ,\ \sin 2 x , \cos 2 x, \dots $$

or the sequence of exponential functions,

$$ 1 , e ^ {\alpha _ {1} x } ,\ e ^ {\alpha _ {2} x }, \dots $$

where $ \{ \alpha _ {i} \} $ is a sequence of distinct real numbers.

When interpolating by algebraic polynomials

$$ \tag{3 } \sum _ { i= 0} ^ { n } a _ {i} x ^ {i} $$

the system $ \{ \phi _ {i} ( x) \} $ is

$$ \tag{4 } \phi _ {i} ( x) = x ^ {i} ,\ \ i = 0, \dots, n , $$

while (1) has the form

$$ \tag{5 } \sum _ { i= 0} ^ { n } a _ {i} x _ {k} ^ {i} = \ f ( x _ {k} ) ,\ \ k = 0, \dots, n . $$
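As a sketch (with hypothetical data, not taken from the article), the system (5) can be solved directly: its coefficient matrix is the Vandermonde matrix with entries $ x _ {k} ^ {i} $, which NumPy builds with `vander`.

```python
import numpy as np

# Hypothetical example: interpolate f(x) = sin(x) at four nodes by
# solving the linear system (5) directly.
f = np.sin
nodes = np.array([0.0, 0.5, 1.0, 1.5])     # x_0, ..., x_n
V = np.vander(nodes, increasing=True)      # row k is (1, x_k, x_k^2, x_k^3)
a = np.linalg.solve(V, f(nodes))           # coefficients a_0, ..., a_n

# The resulting polynomial reproduces f at every node.
p = np.polynomial.polynomial.polyval(nodes, a)
assert np.allclose(p, f(nodes))
```

Note that Vandermonde matrices become ill-conditioned as $ n $ grows, which is one practical reason to prefer the explicit representations discussed next.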

The system (4) is a Chebyshev system, which ensures the existence and uniqueness of the interpolation polynomial (3). A property of (4) is the possibility of obtaining an explicit representation of the interpolation polynomial (3) without immediately having to solve (5). One of the explicit forms of (3),

$$ \tag{6 } g _ {n} ( x) = L _ {n} ( x) = \ \sum _ { i= 0} ^ { n } f ( x _ {i} ) \prod _ {j \neq i } \frac{x - x _ {j} }{x _ {i} - x _ {j} } , $$

is called the Lagrange interpolation polynomial (cf. Lagrange interpolation formula). If the derivative $ f ^ { ( n+ 1) } ( x) $ is continuous, the remainder of (6) can be written as

$$ \tag{7 } f ( x) - g _ {n} ( x) = \ \frac{f ^ { ( n + 1 ) } ( \xi ) }{( n + 1 ) ! } \omega _ {n} ( x) , $$

$$ \xi \in [ y _ {1} , y _ {2} ] ,\ \omega _ {n} ( x) = \prod _ { i= 0} ^ { n } ( x - x _ {i} ) ; $$

where $ y _ {1} = \min ( x _ {0}, \dots, x _ {n} , x ) $; $ y _ {2} = \max ( x _ {0}, \dots, x _ {n} , x) $. The value of the remainder (7) depends, in particular, on the values of $ \omega _ {n} ( x) $. The choice of interpolation nodes for which $ \sup _ {[ a , b ] } | \omega _ {n} ( x) | $ is minimal is of interest. The distribution of the nodes is optimal in this sense if the roots

$$ x _ {k} = \ \frac{b + a }{2} + \frac{b - a }{2} \ \cos \frac{2 k + 1 }{2 n + 2 } \pi ,\ k = 0, \dots, n , $$

of the polynomial

$$ \overline{T} _ {n+1} ^ { [ a , b ] } ( x) = ( b - a ) ^ {n+1} 2 ^ {-1- 2n} T _ {n+1} \left ( \frac{2 x - ( b + a ) }{b - a } \right ) , $$

which deviates least from zero on $ [ a , b ] $, are taken as the nodes. Here $ T _ {n+1} ( z) $ is the Chebyshev polynomial of degree $ n+ 1 $ (cf. Chebyshev polynomials).
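The effect of this choice of nodes can be checked numerically. The following sketch (variable names are illustrative, not from the article) compares $ \max _ {[ a , b ] } | \omega _ {n} ( x) | $ for equally spaced nodes and for the Chebyshev nodes $ x _ {k} $ given above.

```python
import numpy as np

a, b, n = -1.0, 1.0, 10

k = np.arange(n + 1)
# Chebyshev nodes x_k on [a, b], as in the formula above
cheb = (b + a) / 2 + (b - a) / 2 * np.cos((2 * k + 1) * np.pi / (2 * n + 2))
equi = np.linspace(a, b, n + 1)

x = np.linspace(a, b, 10001)

def max_omega(nodes):
    # maximum of |omega_n(x)| = |prod_i (x - x_i)| on a fine grid over [a, b]
    return np.abs(np.prod(x[:, None] - nodes[None, :], axis=1)).max()

# The Chebyshev nodes give a markedly smaller maximum of |omega_n|.
assert max_omega(cheb) < max_omega(equi)
```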

There are a number of other explicit representations of (3) that are more convenient for one or another practical interpolation problem (cf., e.g., Bessel interpolation formula; Gauss interpolation formula; Newton interpolation formula; Stirling interpolation formula; Steffensen interpolation formula; Everett interpolation formula). If it is difficult to estimate in advance the degree of the interpolation polynomial needed to attain the desired error (e.g., when interpolating a table), one has recourse to the Aitken scheme. In this scheme interpolation polynomials of increasing degree are constructed sequentially, making it possible to control the accuracy during the computation. Another approach to the construction of interpolation formulas can be found in Fraser diagram.
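The Aitken scheme can be sketched as follows; this minimal illustrative implementation uses the Neville form of the recursion, in which each level combines interpolants of the previous degree, so the computation can be stopped once successive values agree to the desired accuracy.

```python
def neville(xs, ys, x):
    """Value at x of the interpolation polynomial through (xs[i], ys[i])."""
    p = list(ys)                      # degree-0 interpolants P_i
    n = len(xs)
    for level in range(1, n):
        for i in range(n - level):
            # combine P_{i,...,i+level-1} and P_{i+1,...,i+level}
            p[i] = ((x - xs[i + level]) * p[i] +
                    (xs[i] - x) * p[i + 1]) / (xs[i] - xs[i + level])
    return p[0]
```

For example, with data sampled from the hypothetical $ f ( x) = x ^ {3} - 2 x $, the call `neville([0, 1, 2, 3], [0, -1, 4, 21], 0.5)` reproduces the cubic exactly, giving $ f ( 0.5 ) = - 0.875 $ up to rounding.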

The Hermite interpolation formula gives the solution to the problem of the algebraic interpolation of the values of a function and its derivatives at interpolation nodes.

References

[1] I.S. Berezin, N.P. Zhidkov, "Computing methods" , Pergamon (1973) (Translated from Russian)
[2] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian)

Comments

Many interpolation formulas are contained in [a1]–[a3].

Consider the interpolation problem of finding a polynomial $ P _ {N} $ of degree $ \leq N $ satisfying the conditions

$$ \tag{a1 } P _ {N} ^ {(k)} ( x _ {i} ) = c _ {i,k} , $$

where the $ x _ {1}, \dots, x _ {m} $ are $ m $ distinct knots, and there are precisely $ N + 1 $ equations in (a1). If for each $ i $ the orders of the derivatives occurring in (a1) form an unbroken series $ k = 0, \dots, k _ {i} $, one has Hermite interpolation. (In case $ k _ {i} = 0 $ for all $ i $, i.e. if no interpolation conditions involving derivatives occur in (a1), one has Lagrange interpolation.) If gaps (lacunae) occur, one speaks of lacunary interpolation or Birkhoff interpolation. The pairs $ ( i , k ) $ which occur in (a1) are conveniently described in terms of an interpolation matrix $ E $ of size $ m \times ( n + 1 ) $, $ E = ( e _ {i,k} ) $, $ i = 1, \dots, m $, $ k = 0, \dots, n $, where $ e _ {ik} = 1 $ if $ ( i , k ) $ does occur in (a1) and $ e _ {ik} = 0 $ otherwise. The matrix $ E $ is called regular if (a1) is solvable for all choices of the $ x _ {i} $ and $ c _ {i,k} $ and singular otherwise.

More generally, let $ G = \{ g _ {0}, \dots, g _ {N} \} $ be a system of linearly independent $ n $-times continuously-differentiable real-valued functions on an interval $ [ a, b] $ or on the circle. Instead of polynomials now consider linear combinations $ P = \sum _ {j=0} ^ {N} a _ {j} g _ {j} $. A matrix of zeros and ones $ E = ( e _ {ik} ) $, $ i = 1, \dots, m $, $ k = 0, \dots, n $, is an interpolation matrix if there are precisely $ N + 1 $ ones in $ E $ (and, usually, if there are no rows of zeros in $ E $; this means that all knots do occur at least once in an interpolation condition). Let $ K = \{ x _ {1}, \dots, x _ {m} \} $ be a set of knots, i.e. $ m $ distinct points of the interval or circle. Finally, for each $ ( i , k ) $ such that $ e _ {i,k} = 1 $ let there be given a number $ c _ {i,k} $. These data $ ( G , E , K , c _ {i,k} ) $ define a Birkhoff interpolation problem:

$$ \tag{a2 } P ^ {(k)} ( x _ {i} ) = c _ {i,k} \ \ \textrm{ for all } ( i , k ) \textrm{ such that } e _ {i,k} = 1 . $$

The pair $ ( E , K ) $ is called regular if (a2) is solvable for all choices of the $ c _ {i,k} $.

For each $ ( i , k ) $ such that $ e _ {i,k} = 1 $, consider the row vector of length $ N + 1 $,

$$ g _ {0} ^ {(k)} ( x _ {i} ), \dots, g _ {N} ^ {(k)} ( x _ {i} ) . $$

For varying $ ( i , k ) $ such that $ e _ {i,k} = 1 $ one thus finds $ N + 1 $ row vectors which together make up an $ ( N + 1 ) \times ( N + 1 ) $ matrix. The pair $ ( E , K ) $ is regular if and only if this matrix is invertible. Its determinant, where the pairs $ ( i , k ) $ with $ e _ {i,k} = 1 $ are ordered lexicographically, is denoted $ D ( E , K ) $.
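As an illustration (the data here are chosen for the example, not taken from the article), this matrix and its determinant $ D ( E , K ) $ can be formed numerically for a Hermite-type problem with $ g _ {j} ( t) = t ^ {j} $, $ N = 2 $, and the conditions $ P ( 0) $, $ P ^ \prime ( 0) $, $ P ( 1) $ prescribed.

```python
import numpy as np
from math import factorial

N = 2
# pairs (x_i, k) with e_{i,k} = 1, in lexicographic order
knots_and_orders = [(0.0, 0), (0.0, 1), (1.0, 0)]

def g_deriv(j, k, t):
    # k-th derivative of g_j(t) = t^j, evaluated at t
    if k > j:
        return 0.0
    return factorial(j) / factorial(j - k) * t ** (j - k)

# row for (x_i, k) is (g_0^{(k)}(x_i), ..., g_N^{(k)}(x_i))
M = np.array([[g_deriv(j, k, x) for j in range(N + 1)]
              for (x, k) in knots_and_orders])
D = np.linalg.det(M)
# D != 0, so the pair (E, K) is regular for this choice of knots.
assert abs(D) > 1e-12
```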

Suppose that the $ c _ {i,k} $ are the values $ f ^ { ( k) } ( x _ {i} ) $ of the derivatives of some function $ f $ at the knots. Then a simple formula for the solution of the interpolation problem (a2) follows from Cramer's rule. Indeed, if $ D ^ {(j)} ( E , K ) $ denotes the determinant obtained by replacing $ g _ {j} $ with $ f $ in the formula for $ D ( E , K ) $, then

$$ \tag{a3 } P ( t) = \sum _ { j= 0} ^ { N } \frac{D ^ {(j)} ( E , K ) }{D ( E , K ) } g _ {j} ( t) . $$
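Formula (a3) can be sketched numerically. The data below are illustrative assumptions, not from the article: $ g _ {j} ( t) = t ^ {j} $, $ N = 2 $, $ f ( t) = e ^ {t} $, and the conditions $ P ( 0) = f ( 0) $, $ P ^ \prime ( 0) = f ^ \prime ( 0) $, $ P ( 1) = f ( 1) $.

```python
import numpy as np
from math import factorial

f = np.exp                               # f and all its derivatives equal exp
conds = [(0.0, 0), (0.0, 1), (1.0, 0)]   # pairs (x_i, k), lexicographically

def g_deriv(j, k, t):                    # k-th derivative of t^j at t
    return 0.0 if k > j else factorial(j) / factorial(j - k) * t ** (j - k)

# collocation matrix whose determinant is D(E, K); column j holds g_j
M = np.array([[g_deriv(j, k, x) for j in range(3)] for (x, k) in conds])
c = np.array([f(x) for (x, _k) in conds])   # here exp^{(k)}(x) = exp(x)
D = np.linalg.det(M)

def P(t):
    # formula (a3): P(t) = sum_j  D^{(j)}(E, K) / D(E, K) * g_j(t),
    # where D^{(j)} replaces column j of M by the data vector (Cramer's rule)
    total = 0.0
    for j in range(3):
        Mj = M.copy()
        Mj[:, j] = c
        total += np.linalg.det(Mj) / D * t ** j
    return total

# P satisfies the interpolation conditions (a2)
assert abs(P(0.0) - f(0.0)) < 1e-12
assert abs(P(1.0) - f(1.0)) < 1e-12
```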

See also Hermite interpolation formula.

References

[a1] P.J. Davis, "Interpolation and approximation" , Dover, reprint (1975) pp. 108–126
[a2] F.B. Hildebrand, "Introduction to numerical analysis" , McGraw-Hill (1974)
[a3] J.F. Steffensen, "Interpolation" , Chelsea, reprint (1950)
[a4] G.G. Lorentz, K. Jetter, S.D. Riemenschneider, "Birkhoff interpolation" , Addison-Wesley (1983)
How to Cite This Entry:
Interpolation formula. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Interpolation_formula&oldid=14575
This article was adapted from an original article by M.K. Samarin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article